Knowing More About Questions Can Help: Improving Calibration in Question Answering


Abstract

We study calibration in question answering: estimating whether a model's predicted answer for each question is correct. Unlike prior work, which relies mainly on the model's confidence score, our calibrator incorporates information about the input example (e.g., the question and the evidence context). Together with data augmentation via back translation, our simple approach achieves 5-10% gains in calibration accuracy on reading comprehension benchmarks. Furthermore, we present the first calibration study in the open retrieval setting, comparing the calibration accuracy of retrieval-based span prediction models and answer generation models. Here again, our approach shows consistent gains over calibrators that rely only on model confidence. Our simple and efficient calibrator can be easily adapted to many tasks and model architectures, showing robust gains in all settings.
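To make the approach concrete, below is a minimal sketch of an input-aware calibrator: a binary classifier trained to predict whether the QA model's answer is correct, using the model's confidence together with simple features of the question and evidence context. The classifier choice (scikit-learn's GradientBoostingClassifier), the feature set, and all field names are illustrative assumptions, not the paper's exact recipe.

    # Minimal sketch of an input-aware calibrator (assumptions noted above).
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def featurize(example):
        """Combine the QA model's confidence with simple input features."""
        return [
            example["confidence"],             # QA model's score for its answer
            len(example["question"].split()),  # question length
            len(example["context"].split()),   # evidence-context length
        ]

    # Hypothetical training data: each example carries the QA model's output
    # and a binary label for whether the predicted answer was correct.
    train = [
        {"confidence": 0.92, "question": "Who wrote Hamlet?",
         "context": "Hamlet is a tragedy written by William Shakespeare.",
         "correct": 1},
        {"confidence": 0.55, "question": "When was the play first performed?",
         "context": "The play was likely first staged around the year 1600.",
         "correct": 0},
    ]

    X = np.array([featurize(ex) for ex in train])
    y = np.array([ex["correct"] for ex in train])

    calibrator = GradientBoostingClassifier(n_estimators=100)
    calibrator.fit(X, y)

    # At test time, the calibrated correctness estimate replaces raw confidence.
    test = {"confidence": 0.80, "question": "Where is the Globe Theatre?",
            "context": "The Globe Theatre stands on the south bank of the Thames."}
    p_correct = calibrator.predict_proba(np.array([featurize(test)]))[0, 1]
    print(f"Estimated probability the answer is correct: {p_correct:.2f}")

Because the calibrator is a small model over a handful of features, it can be retrained cheaply for a new task or architecture, which is consistent with the adaptability the abstract claims.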

Citation (APA)
Zhang, S., Gong, C., & Choi, E. (2021). Knowing More About Questions Can Help: Improving Calibration in Question Answering. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 1958–1970). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.172
