Finding Support Examples for In-Context Learning


Abstract

In-context learning is a new learning paradigm in which a language model observes a few examples and then directly outputs the prediction for the test input. Previous work has shown that in-context learning is sensitive to the provided examples, and that randomly sampled examples can cause inferior performance. In this paper, we propose finding “support examples” for in-context learning: given a training dataset, the goal is to select one permutation of a few examples that well characterizes the task for in-context learning and thus leads to superior performance. Although there are extensive methods for finding a coreset of an entire dataset for traditional gradient-based training, they struggle to identify important in-context examples, because in-context learning occurs in the language model's forward pass without gradients or parameter updates, and thus differs significantly from traditional training. Additionally, the strong dependency among in-context examples makes this an NP-hard combinatorial optimization problem, and enumerating all permutations is infeasible. Hence we propose LENS, a fiLter-thEN-Search method that tackles this challenge in two stages. First, we filter the dataset to obtain informative in-context examples individually: we propose a novel metric, InfoScore, to evaluate an example's in-context informativeness based on the language model's feedback, together with a progressive filtering process that discards uninformative examples. Second, we propose diversity-guided example search, which iteratively refines and evaluates the selected example permutations to find examples that fully depict the task. Experimental results show that LENS significantly outperforms a wide range of baselines.
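The filter-then-search procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `lm_score` stands in for the language model's feedback signal behind InfoScore, and `eval_perm` stands in for validation-set evaluation of a candidate permutation; both are hypothetical callables supplied by the caller, and the swap-based refinement is a simplified stand-in for the paper's diversity-guided search.

```python
import random

def info_score(example, probe_set, lm_score):
    """Approximate an example's in-context informativeness as its average
    feedback score over a probe set (hypothetical interface)."""
    return sum(lm_score(example, p) for p in probe_set) / len(probe_set)

def progressive_filter(dataset, probe_set, lm_score, keep_ratio=0.5, min_size=8):
    """Progressively drop the lowest-scoring examples until `min_size` remain."""
    pool = list(dataset)
    while len(pool) > min_size:
        pool.sort(key=lambda ex: info_score(ex, probe_set, lm_score), reverse=True)
        pool = pool[:max(min_size, int(len(pool) * keep_ratio))]
    return pool

def example_search(pool, eval_perm, k=4, iters=50, seed=0):
    """Iteratively refine a k-example permutation by random single-position
    swaps, keeping a change only when the evaluation score improves."""
    rng = random.Random(seed)
    best = rng.sample(pool, k)
    best_score = eval_perm(best)
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(k)] = rng.choice(pool)  # swap one position
        cand_score = eval_perm(cand)
        if cand_score > best_score:
            best, best_score = cand, cand_score
    return best, best_score
```

With toy integer "examples" and `lm_score = lambda ex, p: ex`, the filter keeps the highest-valued examples, and the search then monotonically improves the permutation's score; in the real method both signals come from the language model.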

Citation (APA)

Li, X., & Qiu, X. (2023). Finding Support Examples for In-Context Learning. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 6219–6235). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.411
