We show that small and shallow feed-forward neural networks can achieve near state-of-the-art results on a range of unstructured and structured language processing tasks while being considerably cheaper in memory and computational requirements than deep recurrent models. Motivated by resource-constrained environments like mobile phones, we showcase simple techniques for obtaining such small neural network models, and investigate different tradeoffs when deciding how to allocate a small memory budget.
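To make the setting concrete, below is a minimal sketch (not the authors' implementation) of the kind of small feed-forward model the abstract describes: hashed sparse features embedded into a compact table, mean-pooled, and passed through a single shallow hidden layer. All names and sizes (`SmallFeedForwardTagger`, `num_buckets`, `embed_dim`, `hidden_dim`, `num_labels`) are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class SmallFeedForwardTagger(nn.Module):
    """Minimal sketch of a small feed-forward model for a token-level
    NLP task (e.g. language identification or tagging).

    Hyperparameters are illustrative assumptions, not the paper's values.
    """

    def __init__(self, num_buckets=5000, embed_dim=16, hidden_dim=64, num_labels=12):
        super().__init__()
        # Hashed sparse features (e.g. character n-grams) index into a
        # small embedding table, keeping the memory footprint low.
        self.embed = nn.EmbeddingBag(num_buckets, embed_dim, mode="mean")
        self.hidden = nn.Linear(embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, num_labels)

    def forward(self, feature_ids, offsets):
        # feature_ids: flat tensor of hashed feature indices for the batch
        # offsets: start offset of each example within feature_ids
        pooled = torch.relu(self.hidden(self.embed(feature_ids, offsets)))
        return self.out(pooled)  # unnormalized label scores


# Hypothetical usage: two examples whose hashed feature ids are packed flat.
model = SmallFeedForwardTagger()
feature_ids = torch.tensor([11, 42, 973, 5, 4096])
offsets = torch.tensor([0, 3])        # example 0 -> ids[0:3], example 1 -> ids[3:]
scores = model(feature_ids, offsets)  # shape: (2, num_labels)
```

Under these assumptions, the memory budget is dominated by the embedding table, so shrinking `num_buckets` and `embed_dim` is the main lever for trading accuracy against model size.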
Botha, J. A., Pitler, E., Ma, J., Bakalov, A., Salcianu, A., Weiss, D., … Petrov, S. (2017). Natural language processing with small feed-forward networks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 2879–2885). Association for Computational Linguistics. https://doi.org/10.18653/v1/d17-1309