Natural language processing with small feed-forward networks

Abstract

We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small neural network models, and investigate different tradeoffs when deciding how to allocate a small memory budget.
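As a concrete illustration of the kind of model the abstract describes, the sketch below builds a tiny one-hidden-layer feed-forward classifier over hashed character n-gram embeddings in plain NumPy. All layer sizes, the hashing scheme, and the task framing are illustrative assumptions for this sketch, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not taken from the paper):
VOCAB_BUCKETS = 1 << 14  # hashed feature buckets keep the embedding table small
EMBED_DIM = 16           # embedding dimension; the table dominates memory
HIDDEN_DIM = 64          # a single small hidden layer
NUM_CLASSES = 4          # e.g., a handful of hypothetical labels

# Parameters of a tiny one-hidden-layer feed-forward model.
E = rng.normal(0, 0.1, (VOCAB_BUCKETS, EMBED_DIM)).astype(np.float32)
W1 = rng.normal(0, 0.1, (EMBED_DIM, HIDDEN_DIM)).astype(np.float32)
b1 = np.zeros(HIDDEN_DIM, dtype=np.float32)
W2 = rng.normal(0, 0.1, (HIDDEN_DIM, NUM_CLASSES)).astype(np.float32)
b2 = np.zeros(NUM_CLASSES, dtype=np.float32)

def featurize(text: str, n: int = 3) -> np.ndarray:
    """Hash character n-grams into a fixed number of buckets.

    Python's built-in hash() is salted per process; a real system
    would use a stable hash so models are reproducible across runs.
    """
    grams = [text[i:i + n] for i in range(max(len(text) - n + 1, 1))]
    return np.array([hash(g) % VOCAB_BUCKETS for g in grams])

def forward(text: str) -> np.ndarray:
    """Embed hashed features, average them, and apply one ReLU layer."""
    ids = featurize(text)
    x = E[ids].mean(axis=0)            # bag-of-features embedding
    h = np.maximum(0.0, x @ W1 + b1)   # single small hidden layer
    logits = h @ W2 + b2
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

print(forward("natural language processing"))  # class probabilities
```

Note that the memory footprint here is dominated by the embedding table (VOCAB_BUCKETS x EMBED_DIM floats), which is the budget-allocation tradeoff the abstract alludes to: shrinking the number of buckets or the embedding dimension cuts memory at some cost in accuracy.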

Citation (APA)

Botha, J. A., Pitler, E., Ma, J., Bakalov, A., Salcianu, A., Weiss, D., … Petrov, S. (2017). Natural language processing with small feed-forward networks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017) (pp. 2879–2885). Association for Computational Linguistics. https://doi.org/10.18653/v1/d17-1309
