Neural Langevin Dynamical Sampling

11 citations · 10 Mendeley readers

Abstract

Sampling techniques provide asymptotically unbiased estimators for inference in Bayesian probabilistic models. Markov chain Monte Carlo (MCMC) is a class of sampling methods widely used for inference in complex probabilistic models. However, current MCMC methods can suffer from high autocorrelation, meaning that the samples they generate are far from independent. In this paper, we propose neural networks Langevin Monte Carlo (NNLMC), a new MCMC sampling method that exploits the flexibility of neural networks and the efficiency of Langevin dynamics sampling. We propose a new update function to generate samples and employ appropriate loss functions to improve the performance of NNLMC during sampling. We evaluate our method on a diverse set of challenging distributions and real datasets. Our results show that NNLMC samples from the target distribution with low autocorrelation and rapid convergence, and outperforms state-of-the-art MCMC samplers.
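
For background only, here is a minimal Python/NumPy sketch of the classical Metropolis-adjusted Langevin algorithm (MALA), the Langevin dynamics baseline that NNLMC builds on. The function names, step size, and target distribution are illustrative assumptions; this is not the authors' NNLMC update, which replaces the fixed Langevin proposal with a learned, neural-network-parameterized one.

    import numpy as np

    def mala_sample(log_prob, grad_log_prob, x0, step_size=0.1, n_samples=1000, rng=None):
        """Metropolis-adjusted Langevin algorithm (MALA), a classical Langevin
        dynamics sampler. Background sketch only; NNLMC learns the proposal
        with a neural network instead of using this fixed update."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float)

        def log_q(a, b):
            # Log density (up to a constant) of proposing a from b.
            diff = a - b - step_size * grad_log_prob(b)
            return -np.sum(diff ** 2) / (4.0 * step_size)

        samples = []
        for _ in range(n_samples):
            # Langevin proposal: a gradient step plus Gaussian noise.
            noise = rng.standard_normal(x.shape)
            x_prop = x + step_size * grad_log_prob(x) + np.sqrt(2.0 * step_size) * noise

            # Metropolis-Hastings correction keeps the chain asymptotically unbiased.
            log_alpha = (log_prob(x_prop) + log_q(x, x_prop)
                         - log_prob(x) - log_q(x_prop, x))
            if np.log(rng.uniform()) < log_alpha:
                x = x_prop
            samples.append(x.copy())
        return np.array(samples)

    # Usage: draw samples from a 2-D standard Gaussian.
    draws = mala_sample(log_prob=lambda x: -0.5 * np.sum(x ** 2),
                        grad_log_prob=lambda x: -x,
                        x0=np.zeros(2))

The Metropolis-Hastings correction is what keeps such a sampler asymptotically unbiased, while the step size governs the trade-off between acceptance rate and sample autocorrelation that NNLMC aims to improve by learning the update.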

Citation (APA)

Gu, M., & Sun, S. (2020). Neural Langevin Dynamical Sampling. IEEE Access, 8, 31595–31605. https://doi.org/10.1109/ACCESS.2020.2972611
