Discussion: Markov Chains for Exploring Posterior Distributions

  • Robert, C. P.

Abstract

Several Markov chain methods are available for sampling from a posterior distribution. Two important examples are the Gibbs sampler and the Metropolis algorithm. In addition, several strategies are available for constructing hybrid algorithms. This paper outlines some of the basic methods and strategies and discusses some related theoretical and practical issues. On the theoretical side, results from the theory of general state space Markov chains can be used to obtain convergence rates, laws of large numbers and central limit theorems for estimates obtained from Markov chain methods. These theoretical results can be used to guide the construction of more efficient algorithms. For the practical use of Markov chain methods, standard simulation methodology provides several variance reduction techniques and also gives guidance on the choice of sample size and allocation.
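The abstract names the Metropolis algorithm as one of the basic Markov chain methods for sampling a posterior. As a minimal illustration (not taken from the paper itself), the following sketch implements a random-walk Metropolis sampler targeting a standard normal "posterior", with the posterior mean estimated by a law-of-large-numbers average over the chain; the function name, step size, and target are all illustrative assumptions.

```python
import math
import random

def metropolis_sample(log_post, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis: returns a chain targeting exp(log_post).

    log_post need only be the log posterior density up to an additive constant.
    """
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)           # symmetric Gaussian proposal
        log_alpha = log_post(prop) - log_post(x)  # log acceptance ratio
        if math.log(rng.random()) < log_alpha:    # accept with prob min(1, ratio)
            x = prop
        chain.append(x)
    return chain

# Illustrative target: standard normal, log density up to a constant
chain = metropolis_sample(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
burned = chain[2000:]                  # discard burn-in
mean_est = sum(burned) / len(burned)   # ergodic-average estimate of E[X]
```

Because the proposal is symmetric, the Hastings correction vanishes and the acceptance ratio reduces to the ratio of posterior densities; the ergodic average `mean_est` is the kind of estimate whose convergence rate and central limit behavior the paper's theory addresses.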

Cite

APA

Robert, C. P. (2007). Discussion: Markov Chains for Exploring Posterior Distributions. The Annals of Statistics, 22(4). https://doi.org/10.1214/aos/1176325753
