Augmenting NLP models using Latent Feature Interpolations

20 citations · 70 Mendeley readers

Abstract

Models with a large number of parameters are prone to over-fitting and often fail to capture the underlying input distribution. We introduce Emix, a data augmentation method that interpolates word embeddings and hidden-layer representations to construct virtual training examples. Emix yields significant improvements over previously proposed interpolation-based regularizers and data augmentation techniques, and is more robust to sparsification. We highlight the merits of the proposed method through thorough quantitative and qualitative assessments.
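The abstract describes a mixup-style scheme: virtual examples are formed by convexly interpolating latent representations (embeddings or hidden states) of two training instances, along with their labels. The sketch below illustrates the generic interpolation step; the Beta-distributed mixing coefficient and the function name are illustrative assumptions, not the authors' exact Emix formulation.

```python
import numpy as np

def interpolate_latent(h_a, h_b, y_a, y_b, alpha=0.2, rng=None):
    """Mixup-style interpolation of latent features (illustrative sketch).

    h_a, h_b: hidden representations of two examples (same shape).
    y_a, y_b: one-hot (or soft) label vectors for the two examples.
    alpha:    Beta-distribution parameter for the mixing coefficient
              (a common choice in mixup-style methods; Emix's exact
              coefficient may differ).
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)              # mixing coefficient in (0, 1)
    h_mix = lam * h_a + (1.0 - lam) * h_b     # virtual latent representation
    y_mix = lam * y_a + (1.0 - lam) * y_b     # correspondingly mixed label
    return h_mix, y_mix, lam
```

In practice such a step is applied to a randomly permuted copy of each mini-batch at the embedding layer or at a randomly chosen hidden layer, and the model is trained on the mixed pair `(h_mix, y_mix)` with the usual classification loss.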

Citation (APA)

Jindal, A., Chowdhury, A. G., Didolkar, A., Jin, D., Sawhney, R., & Shah, R. R. (2020). Augmenting NLP models using Latent Feature Interpolations. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 6931–6936). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.611
