Aggregated Gradient Langevin Dynamics

Abstract

In this paper, we explore a general Aggregated Gradient Langevin Dynamics (AGLD) framework for Markov chain Monte Carlo (MCMC) sampling. We investigate the nonasymptotic convergence of AGLD through a unified analysis covering different data-access strategies (e.g., random access, cyclic access, and random reshuffling) and snapshot-updating strategies, under both convex and nonconvex settings. This is the first time that bounds for I/O-friendly strategies such as cyclic access and random reshuffling have been established in the MCMC literature. The theoretical results also indicate that methods within the AGLD framework enjoy both low per-iteration computational complexity and short mixing time. Empirical studies demonstrate that our framework allows us to derive novel schemes that generate high-quality samples for large-scale Bayesian posterior learning tasks.
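To make the idea concrete, below is a minimal sketch of an aggregated-gradient Langevin sampler. It is not the paper's exact algorithm: we assume a SAGA-style per-datum snapshot table as the aggregation mechanism, and the function name `agld_sample`, the data-access switch, and the toy Gaussian-mean posterior (with the prior split uniformly across data terms) are our own illustrative choices.

```python
import numpy as np

def agld_sample(grad_i, theta0, n_data, step, n_iter, access="random", rng=None):
    """Sketch of an aggregated-gradient Langevin sampler (SAGA-style table).

    grad_i(theta, i) returns the gradient of the i-th potential term.
    `access` selects the data-access strategy: "random" (random access),
    "cyclic" (fixed pass order), or "reshuffle" (re-permute each pass).
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    # Snapshot table: one stored gradient per data point, plus their running sum.
    table = np.stack([grad_i(theta, i) for i in range(n_data)])
    table_sum = table.sum(axis=0)
    order = np.arange(n_data)
    samples = []
    for t in range(n_iter):
        if access == "random":
            i = rng.integers(n_data)
        else:  # "cyclic" or "reshuffle"
            if access == "reshuffle" and t % n_data == 0:
                order = rng.permutation(n_data)
            i = order[t % n_data]
        g_new = grad_i(theta, i)
        # SAGA-style aggregate estimate of the full-data gradient.
        g_est = n_data * (g_new - table[i]) + table_sum
        # Overdamped Langevin step driven by the aggregated gradient.
        theta = theta - step * g_est \
            + np.sqrt(2.0 * step) * rng.standard_normal(theta.shape)
        # Refresh the snapshot for index i.
        table_sum = table_sum + (g_new - table[i])
        table[i] = g_new
        samples.append(theta.copy())
    return np.asarray(samples)

# Toy usage: sample the posterior mean of Gaussian data under a N(0, 1) prior.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(1.0, 1.0, size=100)

    def grad_i(theta, i):
        # i-th potential term: one likelihood term plus its share of the prior.
        return (theta - x[i]) + theta / len(x)

    draws = agld_sample(grad_i, theta0=np.zeros(1), n_data=len(x),
                        step=1e-3, n_iter=5000, access="reshuffle", rng=rng)
    print(draws[2000:].mean())  # should be near the posterior mean sum(x)/(n+1)
```

The point of the table is that each iteration touches a single data point (low per-iteration cost) while the control variate keeps the gradient estimate close to the full-data gradient, which is what permits the short mixing times the abstract refers to; swapping the `access` branch exercises the random, cyclic, and random-reshuffle strategies analyzed in the paper.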

Citation
Zhang, C., Xie, J., Shen, Z., Zhao, P., Zhou, T., & Qian, H. (2020). Aggregated gradient Langevin dynamics. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 6746–6753). AAAI Press. https://doi.org/10.1609/aaai.v34i04.6153
