PROGEN: Progressive Zero-shot Dataset Generation via In-context Feedback


Abstract

Recently, dataset-generation-based zero-shot learning has shown promising results by training a task-specific model with a dataset synthesized from large pre-trained language models (PLMs). The final task-specific model often achieves comparable or even better performance than PLMs under the zero-shot setting, with orders of magnitude fewer parameters. However, synthetic datasets have their drawbacks: they have long suffered from low-quality issues (e.g., low informativeness and redundancy). This explains why massive synthetic data does not lead to better performance, a gain we would expect with human-labeled data. To improve the quality of dataset synthesis, we propose a progressive zero-shot dataset generation framework, PROGEN, which leverages feedback from the task-specific model to guide the generation of new training data via in-context examples. Extensive experiments on five text classification datasets demonstrate the effectiveness of the proposed approach. We also show PROGEN achieves on-par or superior performance with only 1% of the synthetic dataset size compared to baseline methods without in-context feedback.
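The progressive loop the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: `sample_from_plm` and `train_and_score` are hypothetical stand-ins for PLM sampling and for the task-specific model's feedback signal, and the toy scoring rule is an assumption made purely so the example runs.

```python
def sample_from_plm(in_context, label, n):
    """Stand-in for conditioning a PLM on in-context demonstrations.
    A real system would prompt a large LM with the high-scoring
    examples from previous rounds; here we fabricate strings."""
    seed = len(in_context)  # vary output slightly across rounds
    return [f"{label} text r{seed} #{i}" for i in range(n)]

def train_and_score(dataset):
    """Stand-in for training the task-specific model and scoring each
    synthetic example with its feedback (e.g., a quality estimate).
    Toy rule: longer texts score higher."""
    return {ex: len(ex[0]) for ex in dataset}

def progen(labels, rounds=3, per_round=4, k=2):
    """Progressive generation: each round, synthesize new examples per
    label, score the pool with the task model, and promote the top-k
    examples to in-context demonstrations for the next round."""
    dataset, in_context = [], []
    for _ in range(rounds):
        for label in labels:
            texts = sample_from_plm(in_context, label, per_round)
            dataset.extend((t, label) for t in texts)
        scores = train_and_score(dataset)
        in_context = sorted(dataset, key=scores.get, reverse=True)[:k]
    return dataset, in_context
```

For instance, `progen(["positive", "negative"])` runs three rounds and returns 24 synthetic (text, label) pairs plus the two highest-scoring examples that would seed the next round's prompt.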

Citation (APA)

Ye, J., Gao, J., Feng, J., Wu, Z., Yu, T., & Kong, L. (2022). PROGEN: Progressive Zero-shot Dataset Generation via In-context Feedback. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 3671–3683). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.269
