MEAL: Stable and Active Learning for Few-Shot Prompting


Abstract

Few-shot classification has made great strides due to foundation models that, through priming and prompting, are highly effective few-shot learners. However, this approach has high variance both across different sets of few shots (data selection) and across different finetuning runs (run variability). This is problematic not only because it impedes the fair comparison of different approaches, but especially because it makes few-shot learning too unreliable for many real-world applications. To alleviate these issues, we make two contributions for more stable and effective few-shot learning: First, we propose novel ensembling methods and show that they substantially reduce run variability. Second, we introduce a new active learning (AL) criterion for data selection and present the first AL-based approach specifically tailored towards prompt-based learning. In our experiments, we show that our combined method, MEAL (Multiprompt finetuning and prediction Ensembling with Active Learning), improves overall performance of prompt-based finetuning by 2.3 points on five diverse tasks. We publicly share our code and data splits at https://github.com/akoksal/MEAL.
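To illustrate the prediction-ensembling idea the abstract describes, the sketch below averages class probabilities over several finetuning runs (or prompts) before taking the argmax. This is a minimal illustration under assumed inputs, not the paper's actual implementation; the function name `ensemble_predict` and the toy array are hypothetical.

```python
import numpy as np

def ensemble_predict(run_probs: np.ndarray) -> np.ndarray:
    """Average class probabilities over runs/prompts, then pick the argmax.

    run_probs: shape (n_runs, n_examples, n_classes); each slice along the
    first axis holds one run's (or one prompt's) softmax outputs.
    """
    mean_probs = run_probs.mean(axis=0)   # (n_examples, n_classes)
    return mean_probs.argmax(axis=-1)     # (n_examples,)

# Toy example: three runs, two examples, two classes.
probs = np.array([
    [[0.6, 0.4], [0.3, 0.7]],
    [[0.4, 0.6], [0.2, 0.8]],
    [[0.7, 0.3], [0.4, 0.6]],
])
print(ensemble_predict(probs))  # -> [0 1]
```

Averaging probabilities rather than taking a majority vote lets confident runs outweigh uncertain ones, which is one common way such ensembles smooth out run-to-run variance.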

Citation (APA)

Köksal, A., Schick, T., & Schütze, H. (2023). MEAL: Stable and Active Learning for Few-Shot Prompting. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 506–517). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.36
