Predefined sparseness in recurrent sequence models

Abstract

Inducing sparseness while training neural networks has been shown to yield models with a lower memory footprint but similar effectiveness to dense models. However, sparseness is typically induced starting from a dense model, and thus this advantage does not hold during training. We propose techniques to enforce sparseness upfront in recurrent sequence models for NLP applications, to also benefit training. First, in language modeling, we show how to increase hidden state sizes in recurrent layers without increasing the number of parameters, leading to more expressive models. Second, for sequence labeling, we show that word embeddings with predefined sparseness lead to similar performance as dense embeddings, at a fraction of the number of trainable parameters.
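To make the two ideas in the abstract concrete, the sketch below shows one possible way to realize them in PyTorch: an embedding table whose sparsity pattern is fixed before training, and a recurrent layer whose state is widened by running several small LSTMs in parallel (i.e., a block-diagonal recurrent weight matrix). All class names, the random mask pattern, and the specific sizes are illustrative assumptions, not the authors' exact construction.

```python
import torch
import torch.nn as nn


class PredefinedSparseEmbedding(nn.Module):
    """Embedding matrix with a fixed (non-trainable) sparsity mask.

    Each word keeps only a predefined subset of embedding dimensions;
    masked-out entries stay zero throughout training, so the number of
    effectively trainable parameters is a fraction of a dense table.
    """

    def __init__(self, vocab_size: int, embed_dim: int, density: float = 0.25):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(vocab_size, embed_dim) * 0.1)
        # Fixed binary mask drawn once up front. A uniform random pattern is
        # an illustrative choice; other predefined patterns are possible.
        mask = (torch.rand(vocab_size, embed_dim) < density).float()
        self.register_buffer("mask", mask)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Masking the weight means gradients only reach the predefined nonzeros.
        sparse_weight = self.weight * self.mask
        return nn.functional.embedding(token_ids, sparse_weight)


class BlockSparseRecurrent(nn.Module):
    """A wider recurrent state at reduced recurrent parameter count.

    Instead of one LSTM with hidden size H, run K independent LSTMs of
    hidden size H/K in parallel and concatenate their outputs, which is
    equivalent to making the hidden-to-hidden weight matrix block-diagonal.
    The hidden-to-hidden parameters shrink by a factor K, so the total
    hidden size can grow without growing the parameter budget.
    """

    def __init__(self, input_dim: int, hidden_dim: int, num_blocks: int = 4):
        super().__init__()
        assert hidden_dim % num_blocks == 0
        self.blocks = nn.ModuleList(
            nn.LSTM(input_dim, hidden_dim // num_blocks, batch_first=True)
            for _ in range(num_blocks)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outputs = [block(x)[0] for block in self.blocks]
        return torch.cat(outputs, dim=-1)


if __name__ == "__main__":
    # Tiny usage example with made-up sizes.
    emb = PredefinedSparseEmbedding(vocab_size=1000, embed_dim=64, density=0.25)
    rnn = BlockSparseRecurrent(input_dim=64, hidden_dim=128, num_blocks=4)
    tokens = torch.randint(0, 1000, (8, 20))          # batch of 8 sequences, length 20
    states = rnn(emb(tokens))                          # shape: (8, 20, 128)
    print(states.shape)
```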

Cite

APA

Demeester, T., Deleu, J., Godin, F., & Develder, C. (2018). Predefined sparseness in recurrent sequence models. In Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL 2018) (pp. 324–333). Association for Computational Linguistics. https://doi.org/10.18653/v1/k18-1032
