Improving language modelling with noise contrastive estimation

Abstract

Neural language models do not scale well when the vocabulary is large. Noise contrastive estimation (NCE) is a sampling-based method that allows for fast learning with large vocabularies. Although NCE has shown promising performance in neural machine translation, its full potential has not been demonstrated in the language modelling literature. A sufficient investigation of the hyperparameters of NCE-based neural language models has been missing. In this paper, we show that NCE can be a very successful approach in neural language modelling when the hyperparameters of the neural network are tuned appropriately. We introduce the 'search-then-converge' learning rate schedule for NCE and design a heuristic that specifies how to use this schedule. The impact of other important hyperparameters, such as the dropout rate and the weight initialisation range, is also demonstrated. Using a popular benchmark, we show that appropriately tuned NCE-based neural language models outperform state-of-the-art single-model methods based on standard dropout and standard LSTM recurrent neural networks.
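
The abstract names two ingredients: the NCE training objective, which replaces the full softmax with a binary discrimination task against k noise samples, and a 'search-then-converge' learning rate schedule. The sketch below illustrates both in a minimal form under stated assumptions; the function names, the unigram noise distribution, and all constants are illustrative, and the paper's heuristic for applying the schedule is not specified in the abstract, so none of this should be read as the authors' implementation.

```python
# Minimal, illustrative sketch (NumPy) of the two ideas named in the abstract.
# Function names, the unigram noise distribution, and all constants are
# assumptions for illustration, not the authors' implementation.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_loss(true_score, true_noise_prob, noise_scores, noise_probs, k):
    """NCE objective for one target word: discriminate the true word from
    k noise samples. Scores are the model's unnormalised logits; the
    *_noise_prob values come from the noise distribution q."""
    pos = np.log(sigmoid(true_score - np.log(k * true_noise_prob)))
    neg = np.log(1.0 - sigmoid(noise_scores - np.log(k * noise_probs))).sum()
    return -(pos + neg)  # negative log-likelihood of the binary task

def search_then_converge_lr(t, lr0=1.0, tau=1000.0):
    """Classic 'search-then-converge' form: roughly constant (search) while
    t << tau, then ~lr0 * tau / t decay (converge) once t >> tau."""
    return lr0 / (1.0 + t / tau)

# Toy usage: score one true word against k samples from a uniform unigram q.
rng = np.random.default_rng(0)
k, vocab = 25, 10_000
unigram = np.full(vocab, 1.0 / vocab)          # assumed noise distribution q
noise_ids = rng.integers(0, vocab, size=k)
loss = nce_loss(true_score=2.0,
                true_noise_prob=unigram[42],
                noise_scores=rng.normal(size=k),
                noise_probs=unigram[noise_ids],
                k=k)
print(f"NCE loss: {loss:.3f}, lr at step 5000: {search_then_converge_lr(5000):.4f}")
```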

Cite

APA

Liza, F. F., & Grzes, M. (2018). Improving language modelling with noise contrastive estimation. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 5277–5284). AAAI Press. https://doi.org/10.1609/aaai.v32i1.11967
