Bayesian compression for natural language processing

6 citations · 113 Mendeley readers

Abstract

In natural language processing, many tasks are successfully solved with recurrent neural networks, but such models have a huge number of parameters. The majority of these parameters are often concentrated in the embedding layer, whose size grows in proportion to the vocabulary size. We propose a Bayesian sparsification technique for RNNs that makes it possible to compress the network by dozens or hundreds of times without time-consuming hyperparameter tuning. We also generalize the model to vocabulary sparsification, filtering out unnecessary words and compressing the RNN even further. We show that the choice of retained words is interpretable.
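The abstract does not spell out the mechanics, but Bayesian sparsification of this kind is commonly implemented in the spirit of sparse variational dropout (Molchanov et al., 2017): each weight gets a factorized Gaussian posterior, a log-uniform prior pushes the posterior noise up, and weights whose noise-to-signal ratio exceeds a threshold are pruned. The sketch below illustrates that general idea for an embedding layer in PyTorch; it is not the authors' code, and the class name SparseVDEmbedding, the pruning threshold of 3.0, and the dataset size of 50000 are assumptions made purely for illustration.

```python
# Illustrative sketch of Bayesian weight sparsification for an embedding layer,
# in the spirit of sparse variational dropout. Names and constants are hypothetical.
import torch
import torch.nn as nn


class SparseVDEmbedding(nn.Module):
    """Embedding layer with a factorized Gaussian posterior over its weights."""

    def __init__(self, vocab_size: int, embed_dim: int):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(vocab_size, embed_dim) * 0.01)
        self.log_sigma2 = nn.Parameter(torch.full((vocab_size, embed_dim), -10.0))

    def log_alpha(self) -> torch.Tensor:
        # alpha = sigma^2 / mu^2: the larger it is, the more prunable the weight.
        return torch.clamp(self.log_sigma2 - torch.log(self.mu ** 2 + 1e-8), -10.0, 10.0)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Reparameterization trick: sample weights from the Gaussian posterior.
            eps = torch.randn_like(self.mu)
            weight = self.mu + torch.exp(0.5 * self.log_sigma2) * eps
        else:
            # At test time, zero out weights whose posterior noise dominates the mean.
            mask = (self.log_alpha() < 3.0).float()
            weight = self.mu * mask
        return nn.functional.embedding(token_ids, weight)

    def kl(self) -> torch.Tensor:
        # Approximation of KL(q || log-uniform prior) from Molchanov et al. (2017).
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        la = self.log_alpha()
        neg_kl = k1 * torch.sigmoid(k2 + k3 * la) - 0.5 * nn.functional.softplus(-la) - k1
        return -neg_kl.sum()


# Training objective: task loss plus the KL term (scaled by the dataset size).
emb = SparseVDEmbedding(vocab_size=10000, embed_dim=128)
tokens = torch.randint(0, 10000, (32, 20))
loss = emb(tokens).pow(2).mean() + emb.kl() / 50000  # dummy task loss for illustration
loss.backward()
```

A natural way to extend this to the vocabulary sparsification described in the abstract is to attach a shared multiplicative variable to each word's embedding row, so that pruning that single variable removes the entire word from the vocabulary; the set of retained words is then open to inspection, which is what makes the selection interpretable.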

Cite

Citation style: APA

Chirkova, N., Lobacheva, E., & Vetrov, D. (2018). Bayesian compression for natural language processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018 (pp. 2910–2915). Association for Computational Linguistics. https://doi.org/10.18653/v1/d18-1319
