A comparative study on regularization strategies for embedding-based neural networks

16 citations · 142 Mendeley readers

Abstract

This paper compares different regularization strategies for addressing a common problem, severe overfitting, in embedding-based neural networks for NLP. We chose two widely studied neural models and tasks as our testbed. We tried several frequently applied or newly proposed regularization strategies, including penalizing weights (embeddings excluded), penalizing embeddings, re-embedding words, and dropout. We also emphasized incremental hyperparameter tuning and the combination of different regularization strategies. The results provide a picture of how to tune hyperparameters for neural NLP models.
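
As a concrete illustration (not the authors' code), the sketch below shows how two of the compared strategies, dropout and an L2 weight penalty that excludes the embeddings, might be wired into a simple embedding-based classifier. The model architecture, layer sizes, learning rate, and penalty coefficients are illustrative assumptions, and PyTorch is used only for convenience.

```python
import torch
import torch.nn as nn

class EmbeddingClassifier(nn.Module):
    """Toy embedding-based classifier used only to illustrate the regularizers."""
    def __init__(self, vocab_size=10000, embed_dim=50, hidden_dim=100,
                 num_classes=2, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.hidden = nn.Linear(embed_dim, hidden_dim)
        self.dropout = nn.Dropout(dropout)   # dropout regularization on the hidden layer
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        # Average word embeddings as a simple sentence representation.
        x = self.embedding(token_ids).mean(dim=1)
        x = torch.tanh(self.hidden(x))
        x = self.dropout(x)
        return self.out(x)

model = EmbeddingClassifier()

# L2 penalty on connection weights only, with the embedding table excluded,
# implemented via per-parameter-group weight decay.
embedding_params = list(model.embedding.parameters())
other_params = [p for n, p in model.named_parameters()
                if not n.startswith("embedding")]
optimizer = torch.optim.SGD([
    {"params": other_params, "weight_decay": 1e-4},    # penalize weights (embeddings excluded)
    {"params": embedding_params, "weight_decay": 0.0},  # set > 0 to also penalize embeddings
], lr=0.1)
```

Penalizing the embeddings themselves, another strategy compared in the paper, would correspond to giving the second parameter group a nonzero weight decay; the specific coefficients above are placeholders rather than values from the paper.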

Citation (APA)

Peng, H., Mou, L., Li, G., Chen, Y., Lu, Y., & Jin, Z. (2015). A comparative study on regularization strategies for embedding-based neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015) (pp. 2106–2111). Association for Computational Linguistics. https://doi.org/10.18653/v1/d15-1252
