A Hybrid Stochastic Gradient Hamiltonian Monte Carlo Method

Abstract

Recent theoretical analyses reveal that existing Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) methods need large mini-batches of samples, whose size depends exponentially on the dimension, to reduce the mean square error of gradient estimates and ensure non-asymptotic convergence guarantees when the target distribution has a nonconvex potential function. In this paper, we propose a novel SG-MCMC algorithm, called the Hybrid Stochastic Gradient Hamiltonian Monte Carlo (HSG-HMC) method, which needs merely one sample per iteration and has a simple structure with only one hyperparameter. This improvement comes from a hybrid stochastic gradient estimator that exploits historical stochastic gradient information to control the mean square error. Theoretical analyses show that our method achieves the best-known overall sample complexity for reaching epsilon-accuracy in the 2-Wasserstein distance when sampling from distributions with nonconvex potential functions. Empirical studies on both simulated and real-world datasets demonstrate the advantage of our method.
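The abstract does not spell out the gradient estimator, so the following is a minimal Python sketch, assuming a STORM/SARAH-style hybrid estimator v_t = beta * g_i(theta_t) + (1 - beta) * (v_{t-1} + g_i(theta_t) - g_i(theta_{t-1})), which matches the description (one data sample per iteration, a single hyperparameter beta) but is an assumption, not the paper's exact update rule. The function names (`hsg_hmc`, `grad_sample`) and the SGHMC discretization used below are likewise illustrative.

```python
import numpy as np

def hsg_hmc(grad_sample, theta0, n_data, n_iters=10_000,
            step=1e-3, friction=1.0, beta=0.1, rng=None):
    """Sketch of HSG-HMC for sampling from exp(-U(theta)), U = sum_i f_i.

    grad_sample(theta, i) must return the stochastic gradient of U at
    theta evaluated on the single data index i.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    momentum = np.zeros_like(theta)

    # Initialize the estimator with one stochastic gradient.
    v = grad_sample(theta, rng.integers(n_data))

    samples = []
    for _ in range(n_iters):
        # SGHMC / underdamped-Langevin step driven by the estimator v.
        noise = np.sqrt(2.0 * friction * step) * rng.standard_normal(theta.shape)
        momentum = momentum - step * (friction * momentum + v) + noise
        theta_new = theta + step * momentum

        # Hybrid estimator (assumed form): convex combination of a fresh
        # stochastic gradient (SGD term) and a recursive variance-reduced
        # correction (SARAH term), both from the same single sample i.
        i = rng.integers(n_data)
        g_new = grad_sample(theta_new, i)
        g_old = grad_sample(theta, i)
        v = beta * g_new + (1.0 - beta) * (v + g_new - g_old)

        theta = theta_new
        samples.append(theta.copy())
    return np.array(samples)
```

As a quick sanity check, one can target a standard Gaussian by passing `grad_sample=lambda theta, i: theta` with `n_data=1`; for small step sizes the empirical covariance of the trailing samples should approach the identity.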

Citation (APA)

Zhang, C., Li, Z., Shen, Z., Xie, J., & Qian, H. (2021). A Hybrid Stochastic Gradient Hamiltonian Monte Carlo Method. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 12B, pp. 10842–10850). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i12.17295
