Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs


Abstract

Large language models (LLMs) have recently shown great advances in a variety of tasks, including natural language understanding and generation. However, their use in high-stakes decision-making scenarios is still limited due to the potential for errors. Selective prediction is a technique that can improve the reliability of LLMs by allowing them to abstain from making predictions when they are unsure of the answer. In this work, we propose a novel framework for adaptation with self-evaluation to improve the selective prediction performance of LLMs. Our framework is based on the idea of using parameter-efficient tuning to adapt the LLM to the specific task at hand while improving its ability to perform self-evaluation. We evaluate our method on a variety of question-answering (QA) datasets and show that it outperforms state-of-the-art selective prediction methods. For example, on the CoQA benchmark, our method improves the AUACC from 91.23% to 92.63% and the AUROC from 74.61% to 80.25%.
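
For illustration, below is a minimal sketch of the selective prediction setting the abstract describes: the model abstains when its self-evaluation confidence falls below a threshold, and AUACC is the area under the accuracy-coverage curve traced by sweeping that threshold. The function names, the NumPy-based implementation, and the 0.8 threshold are illustrative assumptions, not the authors' code.

import numpy as np

def accuracy_coverage_curve(confidences, correct):
    """Sort predictions by confidence (descending) and trace accuracy vs. coverage."""
    order = np.argsort(-np.asarray(confidences))          # most confident first
    correct = np.asarray(correct, dtype=float)[order]
    coverage = np.arange(1, len(correct) + 1) / len(correct)
    accuracy = np.cumsum(correct) / np.arange(1, len(correct) + 1)
    return coverage, accuracy

def auacc(confidences, correct):
    """Approximate area under the accuracy-coverage curve (higher is better)."""
    coverage, accuracy = accuracy_coverage_curve(confidences, correct)
    return np.trapz(accuracy, coverage)

def selective_predict(answer, confidence, threshold=0.8):
    """Return the answer only if the self-evaluation score clears the threshold; otherwise abstain."""
    return answer if confidence >= threshold else None    # None denotes abstention

# Example: three QA predictions with hypothetical self-evaluation scores.
conf = [0.95, 0.40, 0.75]
hit = [1, 0, 1]   # whether each answer matched the reference
print(f"AUACC = {auacc(conf, hit):.3f}")
print(selective_predict("Paris", 0.95), selective_predict("1999", 0.40))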

Cite (APA)

Chen, J., Yoon, J., Ebrahimi, S., Arık, S., Pfister, T., & Jha, S. (2023). Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 5190–5213). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.345
