GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation

Abstract

Large-scale language models such as GPT-3 are excellent few-shot learners, which allows them to be controlled via natural-language prompts. Recent studies report that prompt-based direct classification eliminates the need for fine-tuning but lacks data and inference scalability. This paper proposes a novel data augmentation technique that leverages large-scale language models to generate realistic text samples from a mixture of real samples. We also propose using the soft labels predicted by the language models, which simultaneously distills knowledge from the large-scale models and creates textual perturbations. We perform data augmentation experiments on diverse classification tasks and show that our method substantially outperforms existing text augmentation methods. Experiments on our newly proposed benchmark show that the augmentation effect is not solely attributable to memorization. Further ablation studies and a qualitative analysis provide additional insight into our approach.
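
As a rough illustration of the mechanism the abstract describes, the sketch below shows how a GPT3Mix-style prompt might be assembled from a pair of real labeled examples, and how a soft label could be recovered by renormalizing the probabilities the model assigns to each label word. The prompt wording, the `build_gpt3mix_prompt` and `soft_label` helpers, and the two-label sentiment setup are illustrative assumptions, not the authors' exact template or code.

```python
import math
import random

# Hypothetical helper names and prompt wording; assumptions for
# illustration, not the paper's exact template.

def build_gpt3mix_prompt(examples, k=2, task="movie review",
                         labels=("positive", "negative")):
    """Embed k randomly drawn real examples into a single prompt that
    asks the language model to continue with a new, synthetic example."""
    anchors = random.sample(examples, k)
    lines = [f"Each item in the following list contains a {task} "
             f"and its sentiment, which is one of "
             f"{' or '.join(repr(l) for l in labels)}."]
    for text, label in anchors:
        lines.append(f"Review: {text} (Sentiment: {label})")
    # The dangling "Review:" cues the model to generate the augmented sample.
    lines.append("Review:")
    return "\n".join(lines)

def soft_label(label_logprobs):
    """Renormalize the log-probabilities the model assigns to each label
    word (at the position after "Sentiment:") into a soft label."""
    probs = {lab: math.exp(lp) for lab, lp in label_logprobs.items()}
    total = sum(probs.values())
    return {lab: p / total for lab, p in probs.items()}

seed = [("A touching and beautifully shot film.", "positive"),
        ("The plot drags and the acting is wooden.", "negative")]
print(build_gpt3mix_prompt(seed))
print(soft_label({"positive": -0.3, "negative": -1.5}))
```

Training the downstream classifier on these soft labels, rather than hard ones, is what lets the augmentation double as knowledge distillation from the large model.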

Citation (APA)

Yoo, K. M., Park, D., Kang, J., Lee, S. W., & Park, W. (2021). GPT3Mix: Leveraging large-scale language models for text augmentation. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 2225–2239). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.192
