Measuring Pointwise V-Usable Information In-Context-ly

Abstract

In-context learning (ICL) is a new learning paradigm that has gained popularity along with the development of large language models. In this work, we adapt a recently proposed hardness metric, pointwise V-usable information (PVI), to an in-context version (in-context PVI). Compared to the original PVI, in-context PVI is more efficient in that it requires only a few exemplars and does not require fine-tuning. We conducted a comprehensive empirical analysis to evaluate the reliability of in-context PVI. Our findings indicate that in-context PVI estimates exhibit similar characteristics to the original PVI. Specific to the in-context setting, we show that in-context PVI estimates remain consistent across different exemplar selections and numbers of shots. The variance of in-context PVI estimates across different exemplar selections is insignificant, which suggests that in-context PVI estimates are stable. Furthermore, we demonstrate how in-context PVI can be employed to identify challenging instances. Our work highlights the potential of in-context PVI and provides new insights into the capabilities of ICL.
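Following the original PVI formulation, an instance's PVI is the difference (in bits) between the model's log-probability of the gold label given the input and given a null (input-free) prompt; the in-context variant obtains both quantities from a prompted model rather than fine-tuned ones. A minimal sketch, assuming the caller already has these two label log-probabilities (the helper name and natural-log inputs are illustrative, not from the paper):

```python
import math

def in_context_pvi(logprob_y_given_x: float, logprob_y_given_null: float) -> float:
    """Pointwise V-usable information of an instance (x, y), in bits.

    logprob_y_given_x:    natural-log probability of gold label y when the
                          prompt includes the input x (plus ICL exemplars).
    logprob_y_given_null: natural-log probability of y when the input is
                          replaced by a null/empty placeholder.

    PVI(x -> y) = -log2 p(y | null) + log2 p(y | x).
    Higher PVI means the input makes the label easier for the model;
    low or negative PVI flags a hard instance.
    """
    return (logprob_y_given_x - logprob_y_given_null) / math.log(2)
```

For example, if the model assigns the gold label probability 0.9 with the input and 0.5 without it, the instance's PVI is log2(0.9/0.5) ≈ 0.85 bits.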

Citation (APA)

Lu, S., Chen, S., Li, Y., Bitterman, D., Savova, G., & Gurevych, I. (2023). Measuring Pointwise V-Usable Information In-Context-ly. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 15739–15756). Association for Computational Linguistics (ACL).
