Towards Informative Few-Shot Prompt with Maximum Information Gain for In-Context Learning

Abstract

Large language models (LLMs) can perform in-context learning (ICL) by conditioning on a few demonstrations of a new downstream task. However, this learning paradigm suffers from high instability: the input distribution of the selected examples, their ordering, and the prompt format all induce substantial variance. In this work, we demonstrate that even when all of these factors are held constant, the random selection of examples still results in high variance. Consequently, we explore the informativeness of data examples by quantifying the Information Gain (IG) in prediction obtained after observing a given example candidate, and we propose to sample the candidates with maximum IG. Additionally, we identify the presence of template bias, which can lead to unfair evaluations of IG during sampling. To mitigate this bias, we introduce a Calibration Before Sampling strategy. Experimental results show that our method yields an average relative improvement of 14.3% across six classification tasks using three LLMs.
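The abstract does not spell out the computation, but the general idea admits a short sketch: score each candidate demonstration by how much it reduces the entropy of the model's label distribution, after first dividing out the bias that the bare template induces over labels (the "Calibration Before Sampling" step). The sketch below assumes a hypothetical `label_probs_fn` that queries the LLM and returns a label distribution for an input under a given set of demonstrations, and a `template_probs` vector for the content-free template; all names are illustrative and this is not the authors' implementation.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a probability distribution."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def calibrate(probs, template_probs):
    """Calibration Before Sampling (sketch): divide out the label
    bias the bare template induces, then renormalize."""
    adjusted = probs / np.clip(template_probs, 1e-12, 1.0)
    return adjusted / adjusted.sum()

def information_gain(label_probs_fn, candidate, queries, template_probs):
    """Average entropy reduction in the calibrated prediction over
    held-out queries after conditioning on `candidate`."""
    gains = []
    for q in queries:
        p_before = calibrate(label_probs_fn(q, demos=[]), template_probs)
        p_after = calibrate(label_probs_fn(q, demos=[candidate]), template_probs)
        gains.append(entropy(p_before) - entropy(p_after))
    return float(np.mean(gains))

def select_demonstrations(label_probs_fn, candidates, queries,
                          template_probs, k=4):
    """Rank candidates by IG and keep the top-k as few-shot demos."""
    scored = [(information_gain(label_probs_fn, c, queries, template_probs), c)
              for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:k]]
```

In this reading, a high-IG candidate is one whose presence in the prompt makes the model's predictions markedly more confident on average; the calibration step keeps that comparison from rewarding examples that merely amplify the template's prior toward a particular label.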

Citation (APA)

Liu, H., & Wang, Y. (2023). Towards Informative Few-Shot Prompt with Maximum Information Gain for In-Context Learning. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 15825–15838). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.1060
