Properties and benefits of calibrated classifiers


Abstract

A calibrated classifier provides reliable estimates of the true probability that each test sample is a member of the class of interest. This is crucial in decision-making tasks. Procedures for calibration have already been studied in weather forecasting, game theory, and, more recently, machine learning, where empirical work has shown that calibrating classifiers helps not only in decision making but also improves classification accuracy. In this paper we extend the theoretical foundation of these empirical observations. We prove that (1) a well-calibrated classifier provides bounds on the Bayes error, (2) calibrating a classifier is guaranteed not to decrease classification accuracy, and (3) the procedure of calibration provides the threshold or thresholds on the decision rule that minimize classification error. We also draw out the parallels and differences between methods based on receiver operating characteristic (ROC) curves and calibration-based procedures aimed at finding a minimum-error threshold. In particular, calibration leads to improved performance when multiple thresholds exist. © Springer-Verlag Berlin Heidelberg 2004.
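To make the abstract's point (3) concrete, here is a minimal sketch of one common calibration technique, histogram binning (my own illustration, not necessarily the procedure analyzed in the paper): raw classifier scores are mapped to the empirical positive rate of their bin, and the resulting calibrated probabilities are then thresholded at 0.5, which is the error-minimizing decision rule under 0/1 loss when the probabilities are well calibrated. The data and bin count below are invented for the example.

```python
# Histogram-binning calibration sketch (illustrative, not the paper's method).
# Raw scores are mapped to the empirical positive rate of their score bin;
# thresholding the calibrated probability at 0.5 then minimizes 0/1 error
# when calibration is accurate.

def bin_calibrate(scores, labels, n_bins=5):
    """Return a function mapping a raw score in [0, 1] to a calibrated
    probability: the fraction of positives among training scores in its bin."""
    edges = [i / n_bins for i in range(n_bins + 1)]
    table = []
    for lo, hi in zip(edges, edges[1:]):
        # labels of training samples whose score falls in [lo, hi)
        # (the top bin also includes scores exactly equal to 1.0)
        in_bin = [y for s, y in zip(scores, labels)
                  if lo <= s < hi or (hi == 1.0 and s == 1.0)]
        # empty bins fall back to the bin midpoint
        table.append(sum(in_bin) / len(in_bin) if in_bin else (lo + hi) / 2)
    return lambda s: table[min(int(s * n_bins), n_bins - 1)]

# Toy training data: raw scores are somewhat over-confident.
scores = [0.1, 0.2, 0.8, 0.9, 0.85, 0.15, 0.7, 0.3]
labels = [0,   0,   1,   1,   0,    0,    1,   0]

calib = bin_calibrate(scores, labels)

# Decision rule: predict positive when the calibrated probability >= 0.5.
predict = lambda s: int(calib(s) >= 0.5)
```

For instance, a raw score of 0.85 falls in the top bin, which contains two positives and one negative, so its calibrated probability is 2/3 and it is classified as positive.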

Citation (APA)

Cohen, I., & Goldszmidt, M. (2004). Properties and benefits of calibrated classifiers. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3202, 125–136. https://doi.org/10.1007/978-3-540-30116-5_14
