Slim embedding layers for recurrent neural language models

Citations: 15
Readers: 33 (Mendeley users who have this article in their library)

Abstract

Recurrent neural language models are the state-of-the-art models for language modeling. When the vocabulary is large, the space required to store the model parameters becomes the bottleneck for using recurrent neural language models. In this paper, we introduce a simple space compression method that randomly shares structured parameters at both the input and output embedding layers of a recurrent neural language model. This significantly reduces the number of model parameters while still compactly representing the original input and output embedding layers. The method is easy to implement and tune. Experiments on several data sets show that the new method achieves similar perplexity and BLEU scores while using only a tiny fraction of the parameters.
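To make the idea of random structured parameter sharing concrete, here is a minimal sketch of an input embedding layer in PyTorch. It assumes one plausible reading of the abstract: each word's embedding is split into a fixed number of sub-vectors, and every (word, sub-vector) slot is mapped by a fixed random assignment to one of a small pool of shared, trainable sub-vectors, so only the pool is stored instead of the full vocabulary-sized table. The class name, argument names, and the exact partition-and-pool scheme are illustrative assumptions, not the authors' implementation; the paper applies an analogous idea to the output embedding layer as well, which is not shown here.

```python
import torch
import torch.nn as nn

class SlimEmbedding(nn.Module):
    """Sketch of an embedding layer with random structured parameter sharing.

    Each d-dimensional word embedding is split into num_parts sub-vectors of
    size d // num_parts. A fixed random mapping assigns each (word, part) slot
    to one of pool_size shared sub-vectors, so only pool_size * (d // num_parts)
    parameters are stored instead of vocab_size * d.
    """

    def __init__(self, vocab_size, embed_dim, num_parts, pool_size):
        super().__init__()
        assert embed_dim % num_parts == 0
        self.sub_dim = embed_dim // num_parts
        # Shared pool of sub-vectors: the only trainable parameters.
        self.pool = nn.Embedding(pool_size, self.sub_dim)
        # Fixed random mapping (word, part) -> index into the shared pool.
        mapping = torch.randint(0, pool_size, (vocab_size, num_parts))
        self.register_buffer("mapping", mapping)

    def forward(self, word_ids):
        # word_ids: (...,) token ids -> embeddings: (..., embed_dim)
        sub_ids = self.mapping[word_ids]   # (..., num_parts)
        sub_vecs = self.pool(sub_ids)      # (..., num_parts, sub_dim)
        return sub_vecs.flatten(-2)        # concatenate the sub-vectors

# Illustrative sizes: a 100k vocabulary with 512-dimensional embeddings would
# normally need 51.2M parameters; this sketch stores only 5000 * 64 = 320k.
emb = SlimEmbedding(vocab_size=100_000, embed_dim=512, num_parts=8, pool_size=5000)
x = torch.randint(0, 100_000, (32, 20))   # a batch of token ids
print(emb(x).shape)                        # torch.Size([32, 20, 512])
```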

Citation (APA)

Li, Z., Kulhanek, R., Zhao, Y., Wang, S., & Wu, S. (2018). Slim embedding layers for recurrent neural language models. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 5220–5228). AAAI Press. https://doi.org/10.1609/aaai.v32i1.12000
