Training restricted Boltzmann machines with multi-tempering: Harnessing parallelization


Abstract

Restricted Boltzmann Machines (RBMs) are unsupervised probabilistic neural networks that can be stacked to form Deep Belief Networks. Given the recent popularity of RBMs and the increasing availability of parallel computing architectures, it becomes interesting to investigate learning algorithms for RBMs that benefit from parallel computation. In this paper, we look at two extensions of the parallel tempering algorithm, a Markov Chain Monte Carlo method for approximating the likelihood gradient. The first extension aims at a more effective exchange of information among the parallel sampling chains. The second estimates gradients by averaging over chains at different temperatures. We investigate the efficiency of the proposed methods and demonstrate their usefulness on the MNIST dataset. The weighted averaging in particular appears to benefit Maximum Likelihood learning. © 2012 Springer-Verlag.
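To make the sampling scheme concrete, here is a minimal sketch of the baseline parallel tempering procedure for a binary RBM that the paper's extensions build on. This is not the authors' multi-tempering method; the function names (`gibbs_step`, `tempered_sample`) and the geometric-style temperature ladder are illustrative assumptions, and only the standard adjacent-chain Metropolis swap is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b, c, beta):
    """One tempered Gibbs sweep: sample h ~ p(h|v), then v ~ p(v|h),
    with all energies scaled by the inverse temperature beta."""
    h = (rng.random(c.shape) < sigmoid(beta * (c + W @ v))).astype(float)
    v = (rng.random(b.shape) < sigmoid(beta * (b + W.T @ h))).astype(float)
    return v

def free_energy(v, W, b, c):
    """RBM free energy F(v) = -b.v - sum_j log(1 + exp(c_j + W_j.v))."""
    return -(b @ v) - np.logaddexp(0.0, c + W @ v).sum()

def tempered_sample(W, b, c, betas, n_steps=50):
    """Parallel tempering: run one Gibbs chain per inverse temperature,
    then propose state swaps between adjacent chains (illustrative sketch)."""
    chains = [rng.integers(0, 2, size=b.shape).astype(float) for _ in betas]
    for _ in range(n_steps):
        chains = [gibbs_step(v, W, b, c, beta) for v, beta in zip(chains, betas)]
        for i in range(len(betas) - 1):
            # Metropolis swap: accept with probability
            # min(1, exp((beta_i - beta_{i+1}) * (F(x_i) - F(x_{i+1}))))
            d = (betas[i] - betas[i + 1]) * (
                free_energy(chains[i], W, b, c)
                - free_energy(chains[i + 1], W, b, c))
            if np.log(rng.random()) < d:
                chains[i], chains[i + 1] = chains[i + 1], chains[i]
    # Return the state of the beta = 1 chain (assuming betas ascend to 1),
    # i.e. the chain sampling the actual model distribution.
    return chains[-1]
```

The paper's first extension replaces this adjacent-pairs-only exchange with a more effective information flow between chains, and the second averages gradient statistics over several temperatures instead of using only the last chain.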

APA

Brakel, P., Dieleman, S., & Schrauwen, B. (2012). Training restricted Boltzmann machines with multi-tempering: Harnessing parallelization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7553 LNCS, pp. 92–99). https://doi.org/10.1007/978-3-642-33266-1_12
