Optimal Convergence Rate of Hamiltonian Monte Carlo for Strongly Logconcave Distributions

Abstract

We study the Hamiltonian Monte Carlo (HMC) algorithm for sampling from a strongly logconcave density proportional to e^{−f}, where f : ℝ^d → ℝ is μ-strongly convex and L-smooth (the condition number is κ = L/μ). We show that the relaxation time (inverse of the spectral gap) of ideal HMC is O(κ), improving on the previous best bound of O(κ^{1.5}) (Lee et al., 2018); we complement this with an example where the relaxation time is Ω(κ), for any step size. When implemented with an ODE solver, HMC returns an ε-approximate point in 2-Wasserstein distance using Õ((κd)^{0.5} ε^{−1}) gradient evaluations per step and Õ((κd)^{1.5} ε^{−1}) total time.
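As a rough illustration of the sampling setup the abstract describes (and not the paper's ideal-HMC analysis or its ODE-solver implementation), the sketch below runs HMC with a leapfrog integrator on a μ-strongly convex, L-smooth quadratic potential in Python/NumPy. The dimension, eigenvalue range, step size, and number of leapfrog steps are illustrative assumptions chosen only to make the example run.

```python
import numpy as np

# Illustrative HMC sketch: sample from a density proportional to exp(-f(x))
# for a quadratic potential f(x) = 0.5 * x^T A x whose Hessian A has
# eigenvalues in [mu, L], so the condition number is kappa = L / mu.
# Step size and trajectory length below are arbitrary illustrative choices.

d = 10
mu, L = 1.0, 10.0
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = Q @ np.diag(np.linspace(mu, L, d)) @ Q.T   # SPD Hessian with kappa = 10

def f(x):          # potential
    return 0.5 * x @ A @ x

def grad_f(x):     # gradient of the potential
    return A @ x

def hmc_step(x, step=0.1, n_leapfrog=20):
    """One HMC step: draw a Gaussian momentum, integrate the Hamiltonian
    dynamics with leapfrog (a simple ODE discretization), then
    Metropolis accept/reject to correct the discretization error."""
    p = rng.standard_normal(d)
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * step * grad_f(x_new)          # initial half kick
    for _ in range(n_leapfrog - 1):
        x_new += step * p_new                    # drift
        p_new -= step * grad_f(x_new)            # full kick
    x_new += step * p_new
    p_new -= 0.5 * step * grad_f(x_new)          # final half kick
    dH = (f(x_new) + 0.5 * p_new @ p_new) - (f(x) + 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < -dH else x

x = np.zeros(d)
samples = []
for _ in range(2000):
    x = hmc_step(x)
    samples.append(x.copy())

# For this Gaussian target the stationary covariance is A^{-1}.
err = np.linalg.norm(np.cov(np.array(samples).T) - np.linalg.inv(A))
print("covariance error vs A^{-1}:", err)
```

The accept/reject step stands in for the more careful ODE-solver error control analyzed in the paper; it simply keeps the discretized dynamics exact with respect to the target density.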

Cite (APA)

Chen, Z., & Vempala, S. S. (2022). Optimal Convergence Rate of Hamiltonian Monte Carlo for Strongly Logconcave Distributions. Theory of Computing, 18. https://doi.org/10.4086/toc.2022.v018a009
