Langevin-Type Models II: Self-Targeting Candidates for MCMC Algorithms

  • Stramer, O.
  • Tweedie, R. L.
ISSN: 1387-5841

Abstract

The Metropolis-Hastings algorithm for estimating a distribution p is based on choosing a candidate Markov chain and then accepting or rejecting moves of the candidate to produce a chain known to have p as the invariant measure. The traditional methods use candidates essentially unconnected to p. We show that the class of candidate distributions, developed in Part I (Stramer and Tweedie 1999), which “self-target” towards the high density areas of p, produce Metropolis-Hastings algorithms with convergence rates that appear to be considerably better than those known for the traditional candidate choices, such as random walk. We illustrate this behavior for examples with exponential and polynomial tails, and for a logistic regression model using a Gibbs sampling algorithm. The detailed results are given in one dimension but we indicate how they may extend successfully to higher dimensions.
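The self-targeting candidates studied in the paper are Langevin-type, drifting proposals towards high-density regions of p. As a rough illustration of the idea only (not the paper's exact construction), a one-dimensional Metropolis-Hastings step with a Langevin proposal can be sketched in Python as below; the standard-normal target, the step size h, and the function names are assumptions for illustration.

```python
import numpy as np

def mala_step(x, log_p, grad_log_p, h, rng):
    """One Metropolis-Hastings step with a Langevin-type ("self-targeting")
    proposal: the candidate mean is shifted along the gradient of log p.
    Illustrative sketch only; h and the target are assumptions."""
    # Langevin proposal: Normal(x + (h/2) * grad log p(x), h)
    mean_x = x + 0.5 * h * grad_log_p(x)
    y = mean_x + np.sqrt(h) * rng.standard_normal()
    mean_y = y + 0.5 * h * grad_log_p(y)
    # Log proposal densities q(y|x) and q(x|y), constants cancel
    log_q_forward = -((y - mean_x) ** 2) / (2 * h)
    log_q_backward = -((x - mean_y) ** 2) / (2 * h)
    # Metropolis-Hastings accept/reject
    log_alpha = log_p(y) - log_p(x) + log_q_backward - log_q_forward
    return y if np.log(rng.uniform()) < log_alpha else x

# Assumed target for the sketch: standard normal
log_p = lambda x: -0.5 * x ** 2
grad_log_p = lambda x: -x

rng = np.random.default_rng(0)
x, h = 0.0, 0.5
samples = []
for _ in range(10_000):
    x = mala_step(x, log_p, grad_log_p, h, rng)
    samples.append(x)
print(np.mean(samples), np.var(samples))  # should be near 0 and 1
```

Replacing the gradient-shifted proposal mean with plain `x` recovers the random-walk candidate that the paper uses as the traditional point of comparison.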

Citation (APA)

Stramer, O., & Tweedie, R. L. (1999). Langevin-Type Models II: Self-Targeting Candidates for MCMC Algorithms. Methodology and Computing in Applied Probability, 1(3), 307–328. Retrieved from http://dx.doi.org/10.1023/A:1010090512027
