Training Restricted Boltzmann Machines on Word Observations

arXiv: 1202.5695
43 citations · 207 Mendeley readers

Abstract

The restricted Boltzmann machine (RBM) is a flexible model for complex data. However, using RBMs for high-dimensional multinomial observations poses significant computational difficulties. In natural language processing applications, words are naturally modeled by K-ary discrete distributions, where K is determined by the vocabulary size and can easily be in the hundreds of thousands. The conventional approach to training RBMs on word observations is limited because it requires sampling the states of K-way softmax visible units during block Gibbs updates, an operation that takes time linear in K. In this work, we address this issue with a more general class of Markov chain Monte Carlo operators on the visible units, yielding updates with computational complexity independent of K. We demonstrate the success of our approach by training RBMs on hundreds of millions of word n-grams using larger vocabularies than previously feasible with RBMs and by using the learned features to improve performance on chunking and sentiment classification tasks, achieving state-of-the-art results on the latter.
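
To make the complexity contrast concrete, here is a minimal NumPy sketch (illustrative only, not the authors' released code; the weight matrix W, biases b, hidden state h, and the unigram proposal q are all assumed names and shapes). It compares the exact Gibbs update of a K-way softmax visible unit, which must score all K words, with a Metropolis-Hastings update that proposes a word from a fixed distribution drawn in O(1) via the alias method, so each proposal touches only two rows of W. A unigram proposal is shown here as one natural choice; the paper's actual proposal distribution may differ.

```python
import numpy as np

def build_alias(p):
    """Precompute Walker alias tables so a draw from p costs O(1).
    One-time O(K) setup; p must be a proper probability vector."""
    K = len(p)
    prob = np.zeros(K)
    alias = np.zeros(K, dtype=np.int64)
    scaled = np.asarray(p, dtype=np.float64) * K
    small = [i for i in range(K) if scaled[i] < 1.0]
    large = [i for i in range(K) if scaled[i] >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:  # leftovers are numerically ~1
        prob[i] = 1.0
    return prob, alias

def alias_draw(prob, alias, rng):
    i = rng.integers(len(prob))
    return i if rng.random() < prob[i] else alias[i]

def gibbs_update_word(h, W, b, rng):
    """Exact block-Gibbs update of one softmax visible unit:
    scores all K words, so it costs O(K * H)."""
    logits = b + W @ h
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(len(b), p=p)

def mh_update_word(k, h, W, b, log_q, prob, alias, rng):
    """Metropolis-Hastings update of the same unit: propose a word from
    a fixed distribution q (sampled in O(1) via the alias tables) and
    accept/reject using only two rows of W, i.e. O(H) work, independent of K."""
    k_new = alias_draw(prob, alias, rng)
    # log unnormalized conditional: log p~(k | h) = b[k] + W[k] . h
    log_a = (b[k_new] + W[k_new] @ h) - (b[k] + W[k] @ h)
    log_a += log_q[k] - log_q[k_new]  # proposal correction q(k)/q(k_new)
    return k_new if np.log(rng.random()) < log_a else k

# Toy usage with assumed sizes: K-word vocabulary, H hidden units.
rng = np.random.default_rng(0)
K, H = 100_000, 128
W = 0.01 * rng.standard_normal((K, H))
b = np.zeros(K)
h = (rng.random(H) < 0.5).astype(np.float64)
q = np.full(K, 1.0 / K)  # stand-in for a unigram distribution
prob, alias = build_alias(q)
k = rng.integers(K)
for _ in range(10):  # a few cheap MH updates in place of one exact Gibbs draw
    k = mh_update_word(k, h, W, b, np.log(q), prob, alias, rng)
```

The one-time alias-table construction costs O(K), but every subsequent proposal is O(1), which is what makes the per-update cost independent of the vocabulary size.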

Citation (APA)

Dahl, G. E., Adams, R. P., & Larochelle, H. (2012). Training restricted Boltzmann machines on word observations. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012 (Vol. 1, pp. 679–686).
