Unsupervised Data Augmentation with Naive Augmentation and without Unlabeled Data

Abstract

Unsupervised Data Augmentation (UDA) is a semi-supervised technique that applies a consistency loss to penalize differences between a model's predictions on (a) observed (unlabeled) examples and (b) corresponding 'noised' examples produced via data augmentation. While UDA has gained popularity for text classification, open questions linger over which of its components are important and how to extend the method to sequence labeling tasks; this paper addresses these questions. Our main contribution is an empirical study of UDA to establish which components of the algorithm confer benefits in NLP. Notably, although prior work has emphasized the use of clever augmentation techniques including back-translation, we find that enforcing consistency between predictions assigned to observed and randomly substituted words often yields benefits comparable to (or greater than) those of more complex perturbation models. Furthermore, we find that applying UDA's consistency loss affords meaningful gains without any unlabeled data at all, i.e., in a standard supervised setting. In short, UDA need not be unsupervised to realize many of its noted benefits, and does not require complex data augmentation to be effective.
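
To make the mechanism concrete, below is a minimal sketch of a consistency loss of the kind the abstract describes, paired with naive random-word-substitution augmentation. It assumes a PyTorch classifier `model` mapping token ids to class logits; the function names, the substitution probability `p`, and the choice of KL divergence as the divergence measure are illustrative assumptions, not the paper's reference implementation.

import torch
import torch.nn.functional as F

def random_substitute(token_ids, vocab_size, p=0.1):
    # Naive augmentation: with probability p, replace each token with a
    # uniformly sampled vocabulary word (p is a hypothetical default).
    mask = torch.rand(token_ids.shape, device=token_ids.device) < p
    random_ids = torch.randint(0, vocab_size, token_ids.shape,
                               device=token_ids.device)
    return torch.where(mask, random_ids, token_ids)

def uda_consistency_loss(model, token_ids, vocab_size):
    # Treat predictions on the observed input as a fixed target
    # (no gradient flows through them)...
    with torch.no_grad():
        target = F.softmax(model(token_ids), dim=-1)
    # ...and penalize the divergence of predictions on the noised input.
    noised = random_substitute(token_ids, vocab_size)
    log_pred = F.log_softmax(model(noised), dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean")

In the supervised setting the abstract describes, a term of this form would simply be added to the standard cross-entropy loss on the labeled examples themselves, so no separate unlabeled pool is required.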

Citation (APA)

Lowell, D., Lipton, Z. C., Howard, B. E., & Wallace, B. C. (2021). Unsupervised Data Augmentation with Naive Augmentation and without Unlabeled Data. In EMNLP 2021 - 2021 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 4992–5001). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.emnlp-main.408
