Stop Measuring Calibration When Humans Disagree

Abstract

Calibration is a popular framework for evaluating whether a classifier knows when it does not know, i.e., whether its predictive probabilities are a good indication of how likely a prediction is to be correct. Correctness is commonly estimated against the human majority class. Recently, calibration to the human majority has been measured on tasks where humans inherently disagree about which class applies. We show that measuring calibration to the human majority is theoretically problematic given inherent disagreement, demonstrate this empirically on the ChaosNLI dataset, and derive several instance-level measures of calibration that capture key statistical properties of human judgements: class frequency, ranking, and entropy.
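
To make the contrast concrete, here is a minimal Python sketch of the two evaluation styles the abstract describes: standard Expected Calibration Error (ECE), where correctness is judged against the human majority class, versus instance-level comparisons of the model's predictive distribution to the full distribution of human judgements. The function names (ece_against_majority, instance_level_gaps) and the particular gap measures (total variation over class frequencies, absolute entropy gap) are illustrative assumptions in the spirit of the abstract, not the paper's exact definitions.

```python
import numpy as np

def ece_against_majority(probs, human_votes, n_bins=10):
    """Standard ECE, with correctness judged against the human
    majority class (the setup the abstract argues against when
    annotators inherently disagree).

    probs: (N, C) array of model predictive probabilities.
    human_votes: (N, C) array of human annotation counts per class.
    """
    conf = probs.max(axis=1)               # model confidence per instance
    pred = probs.argmax(axis=1)            # model's predicted class
    majority = human_votes.argmax(axis=1)  # human majority class
    correct = (pred == majority).astype(float)

    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            # weight each bin's |confidence - accuracy| gap by its mass
            ece += in_bin.mean() * abs(conf[in_bin].mean() - correct[in_bin].mean())
    return ece

def instance_level_gaps(probs, human_votes, eps=1e-12):
    """Illustrative instance-level comparisons against the full human
    judgement distribution: a class-frequency gap (total variation)
    and an entropy gap. These follow the spirit of the abstract's
    proposal, not its exact measures.
    """
    human_dist = human_votes / human_votes.sum(axis=1, keepdims=True)
    tvd = 0.5 * np.abs(probs - human_dist).sum(axis=1)
    entropy = lambda p: -(p * np.log(p + eps)).sum(axis=1)
    entropy_gap = np.abs(entropy(probs) - entropy(human_dist))
    return tvd, entropy_gap
```

For instance, with probs = [[0.4, 0.35, 0.25]] and votes = [[30, 40, 30]] (e.g., 100 ChaosNLI-style annotations), the model's top class disagrees with the human majority, so majority-based ECE treats the prediction as wrong, even though the predicted distribution tracks the human one closely; both instance-level gaps stay small (total variation 0.1, entropy gap under 0.01).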

Citation (APA)

Baan, J., Aziz, W., Plank, B., & Fernández, R. (2022). Stop Measuring Calibration When Humans Disagree. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 (pp. 1892–1915). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.emnlp-main.124
