We present a new learning algorithm for Mean Field Boltzmann Machines based on the contrastive divergence optimization criterion. In addition to minimizing the divergence between the data distribution and the equilibrium distribution, we maximize the divergence between one-step reconstructions of the data and the equilibrium distribution. This eliminates the need to estimate equilibrium statistics, so we do not need to approximate the multimodal probability distribution of the free network with the unimodal mean field distribution. We test the learning algorithm on the classification of digits. © Springer-Verlag Berlin Heidelberg 2002.
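The learning rule summarized in the abstract can be illustrated with a minimal contrastive-divergence sketch: run mean-field inference on the data (positive phase), form a one-step reconstruction, run inference again on the reconstruction (negative phase), and move the weights toward the data statistics and away from the reconstruction statistics. The sketch below assumes a bipartite (visible/hidden) architecture, in which one sweep of mean-field updates is exact; the class name, shapes, and learning rate are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MFBoltzmann:
    """Illustrative contrastive-divergence (CD-1) learner for a bipartite
    Boltzmann machine with mean-field inference. All names and hyper-
    parameters are assumptions for this sketch, not the paper's notation."""

    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b = np.zeros(n_vis)   # visible biases
        self.c = np.zeros(n_hid)   # hidden biases
        self.lr = lr

    def mean_field_hidden(self, v):
        # Mean-field hidden activations; for a bipartite architecture
        # a single update already reaches the fixed point.
        return sigmoid(v @ self.W + self.c)

    def cd1_step(self, v0):
        h0 = self.mean_field_hidden(v0)            # positive phase on data
        v1 = sigmoid(h0 @ self.W.T + self.b)       # one-step reconstruction
        h1 = self.mean_field_hidden(v1)            # negative phase on reconstruction
        n = v0.shape[0]
        # Contrastive divergence: data statistics minus reconstruction statistics.
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)
        return float(np.mean((v0 - v1) ** 2))      # reconstruction error
```

No equilibrium (free-running) statistics are estimated anywhere: both phases use only the data and its one-step reconstruction, which is the point the abstract makes about avoiding the unimodal mean-field approximation to the multimodal free distribution.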
Welling, M., & Hinton, G. E. (2002). A new learning algorithm for Mean Field Boltzmann Machines. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2415 LNCS, pp. 351–357). Springer Verlag. https://doi.org/10.1007/3-540-46084-5_57