Abstract
Prompting is a widely adopted technique for fine-tuning large language models. Recent research by Scao and Rush (2021) demonstrated its effectiveness in improving few-shot learning performance over vanilla fine-tuning, and also showed that prompting and vanilla fine-tuning achieve similar performance in the high-data regime (≳ 2000 samples). This paper investigates the impact of imbalanced data distributions on prompting. Through rigorous experimentation on diverse datasets and models, our findings reveal that even in high-data regimes, prompting consistently outperforms vanilla fine-tuning, with an average performance improvement of 2.5%.
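To make the experimental contrast concrete, the sketch below illustrates the two ingredients the abstract refers to: constructing an artificially imbalanced training subset and wrapping inputs in a cloze-style prompt with a verbalizer, as opposed to feeding raw text to a classification head (vanilla fine-tuning). This is a minimal, hypothetical illustration; the helper `make_imbalanced_subset`, the template, and the verbalizer are assumptions for exposition and are not taken from the paper.

```python
import random
from collections import Counter

def make_imbalanced_subset(examples, minority_label, ratio, seed=0):
    """Downsample one class so it makes up `ratio` of the subset.

    `examples` is a list of (text, label) pairs; `ratio` is the desired
    minority-class fraction (e.g. 0.05 for a 95/5 split).
    Hypothetical helper, not from the paper.
    """
    rng = random.Random(seed)
    majority = [ex for ex in examples if ex[1] != minority_label]
    minority = [ex for ex in examples if ex[1] == minority_label]
    # Keep all majority examples and sample the minority class to hit `ratio`.
    n_minority = int(len(majority) * ratio / (1.0 - ratio))
    subset = majority + rng.sample(minority, min(n_minority, len(minority)))
    rng.shuffle(subset)
    return subset

# Vanilla fine-tuning feeds the raw text to a classification head, while
# prompt-based fine-tuning inserts it into a cloze template and reads the
# label off the masked-LM vocabulary via a verbalizer. The template and
# verbalizer below are illustrative only.
TEMPLATE = "{text} It was [MASK]."
VERBALIZER = {0: "terrible", 1: "great"}

def to_prompt(text):
    """Render a raw input as a cloze-style prompt."""
    return TEMPLATE.format(text=text)

if __name__ == "__main__":
    # Toy data: label 1 is the minority class.
    data = [(f"review {i}", 1 if i % 10 == 0 else 0) for i in range(3000)]
    subset = make_imbalanced_subset(data, minority_label=1, ratio=0.05)
    print(Counter(label for _, label in subset))
    print(to_prompt("The movie was surprisingly good."))
```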
Citation
Mohta, J. (2023). Prompting language models improves performance in imbalanced setting. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 201–211). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.sustainlp-1.14