FlipDA: Effective and Robust Data Augmentation for Few-Shot Learning


Abstract

Most previous methods for text data augmentation are limited to simple tasks and weak baselines. We explore data augmentation on hard tasks (i.e., few-shot natural language understanding) and strong baselines (i.e., pretrained models with over one billion parameters). Under this setting, we reproduced a large number of previous augmentation methods and found that these methods bring marginal gains at best and sometimes substantially degrade performance. To address this challenge, we propose FlipDA, a novel data augmentation method that jointly uses a generative model and a classifier to generate label-flipped data. Central to FlipDA is the discovery that generating label-flipped data is more crucial to performance than generating label-preserved data. Experiments show that FlipDA achieves a good tradeoff between effectiveness and robustness: it substantially improves many tasks while not negatively affecting the others.
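The core loop described in the abstract — generate candidate augmentations, then keep only those the classifier assigns the flipped label — can be sketched as follows. This is a toy illustration of the filtering idea, not the authors' implementation: the word-dropping "generator" and the keyword "classifier" are hypothetical stand-ins for the pretrained generative model and few-shot classifier used in the paper.

```python
import random

random.seed(0)  # deterministic for the toy example

def generate_candidates(text, n=5):
    """Toy stand-in for a pretrained generator: drop one random word."""
    words = text.split()
    cands = []
    for _ in range(n):
        i = random.randrange(len(words))
        cands.append(" ".join(words[:i] + words[i + 1:]))
    return cands

def classifier_predict(text):
    """Hypothetical classifier: a trivial keyword rule for illustration."""
    return "positive" if "good" in text else "negative"

def flipda_augment(text, flipped_label, n=5):
    """Keep only candidates the classifier assigns the flipped label."""
    return [(c, flipped_label)
            for c in generate_candidates(text, n)
            if classifier_predict(c) == flipped_label]

# Augment a "positive" example with label-flipped ("negative") variants:
augmented = flipda_augment("the movie was good fun", "negative", n=10)
for text, label in augmented:
    print(label, "|", text)
```

The key design point mirrored here is that candidates are not trusted blindly: only generations whose predicted label actually matches the target (flipped) label survive, which is what makes the method robust on hard tasks.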

Citation (APA)
Zhou, J., Zheng, Y., Tang, J., Li, J., & Yang, Z. (2022). FlipDA: Effective and Robust Data Augmentation for Few-Shot Learning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 8646–8665). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-long.592
