Leveraging BERT with Mixup for Sentence Classification


Abstract

Good generalization capability is an important quality of well-trained and robust neural networks. However, networks usually struggle when faced with samples outside the training distribution. Mixup is a technique that improves generalization, reduces memorization, and increases adversarial robustness. We apply a variant of Mixup called Manifold Mixup to the sentence classification problem, and present the results along with an ablation study. Our methodology outperforms CNN, LSTM, and vanilla BERT models in generalization.
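The core of (Manifold) Mixup is a convex interpolation of two training examples and their labels, applied at a hidden layer rather than on raw inputs. A minimal sketch of that interpolation is below; the `alpha` value, the 4-dimensional embeddings, and the choice of mixing [CLS]-style representations are illustrative assumptions, not details taken from this abstract.

```python
import numpy as np

def manifold_mixup(h_a, h_b, y_a, y_b, alpha=0.4, rng=None):
    """Interpolate two hidden representations and their one-hot labels.

    Manifold Mixup applies the Mixup interpolation to hidden states
    (e.g. an intermediate BERT layer) instead of raw inputs. The mixing
    coefficient lam is drawn from Beta(alpha, alpha); alpha=0.4 is an
    illustrative default, not a value from the paper.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    h_mix = lam * h_a + (1.0 - lam) * h_b  # mixed hidden state
    y_mix = lam * y_a + (1.0 - lam) * y_b  # correspondingly mixed soft label
    return h_mix, y_mix, lam

# Example: mix two sentence-level embeddings (dimension 4 for illustration)
h_a = np.array([1.0, 0.0, 0.0, 0.0])
h_b = np.array([0.0, 1.0, 0.0, 0.0])
y_a = np.array([1.0, 0.0])  # one-hot label, class 0
y_b = np.array([0.0, 1.0])  # one-hot label, class 1
h_mix, y_mix, lam = manifold_mixup(h_a, h_b, y_a, y_b)
```

The mixed pair `(h_mix, y_mix)` is then fed to the classifier head in place of a real example, which encourages linear behavior between training points and is the mechanism behind the reduced memorization the abstract mentions.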

Citation (APA)

Jindal, A., Gnaneshwar, D., Sawhney, R., & Shah, R. R. (2020). Leveraging BERT with Mixup for Sentence Classification. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 13829–13830). AAAI Press.
