Robust Gram embeddings

Abstract

Word embedding models learn vectorial word representations that can be used in a variety of NLP applications. When training data is scarce, these models risk losing their generalization ability due to model complexity and overfitting to the finite data. We propose a regularized embedding formulation, called Robust Gram (RG), which penalizes overfitting by suppressing the disparity between target and context embeddings. Our experimental analysis shows that the RG model trained on small datasets generalizes better than the alternatives, is more robust to variations in the training set, and correlates well with human similarity judgments on a set of word similarity tasks.
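The abstract states that RG "suppresses the disparity between target and context embeddings." The exact objective is given in the paper; as a minimal sketch, one plausible reading is a skip-gram-with-negative-sampling loss augmented with a squared-Frobenius penalty on the difference between the two embedding matrices. The function names, the penalty form, and the weight `lam` here are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

# Standard skip-gram keeps two embedding matrices: W (target) and C (context).
# Sketch (assumption): model the "disparity" penalty from the abstract as
# lam * ||W - C||_F^2 added to the usual SGNS loss.

rng = np.random.default_rng(0)
vocab_size, dim = 100, 16
W = rng.normal(scale=0.1, size=(vocab_size, dim))  # target embeddings
C = rng.normal(scale=0.1, size=(vocab_size, dim))  # context embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_pair_loss(W, C, target, context, negatives):
    """Skip-gram-with-negative-sampling loss for one (target, context) pair."""
    pos = -np.log(sigmoid(W[target] @ C[context]))
    neg = -np.sum(np.log(sigmoid(-W[target] @ C[negatives].T)))
    return pos + neg

def robust_gram_loss(W, C, target, context, negatives, lam=0.1):
    """SGNS loss plus a hypothetical disparity penalty lam * ||W - C||_F^2."""
    disparity = lam * np.sum((W - C) ** 2)
    return sgns_pair_loss(W, C, target, context, negatives) + disparity
```

With `lam = 0` this reduces to plain SGNS; a positive `lam` pulls the two embedding matrices toward each other, which is one way a model could be discouraged from exploiting the extra degrees of freedom that drive overfitting on small corpora.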

Citation (APA)

Kekeç, T., & Tax, D. M. J. (2016). Robust Gram embeddings. In EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 1060–1065). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d16-1113
