This paper provides an overview of calibration methods for supervised classification learners. Calibration maps classifier scores into the probability space; such probabilistic output is especially useful when the classification result is post-processed. The calibrators are compared via 10-fold cross-validation according to their performance on SVM and CART outputs for four different two-class data sets.
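The abstract does not name the individual calibrators; as a hypothetical illustration only, the sketch below uses scikit-learn's CalibratedClassifierCV with sigmoid (Platt-style) calibration on SVM and CART base learners, evaluated by 10-fold cross-validation on a synthetic two-class data set via the Brier score. The data set, learners, and scoring choice are assumptions, not the paper's own setup.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# sigmoid (Platt-style) calibration of SVM and CART scores, compared
# by 10-fold cross-validation using the Brier score.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic two-class data set (stand-in for the four data sets in the paper)
X, y = make_classification(n_samples=1000, n_classes=2, random_state=0)

base_learners = {
    "SVM": SVC(),                      # decision scores, no native probabilities
    "CART": DecisionTreeClassifier(),  # raw leaf-frequency scores
}

for name, clf in base_learners.items():
    # Wrap the base learner with a score-to-probability calibrator
    calibrated = CalibratedClassifierCV(clf, method="sigmoid", cv=5)
    # Evaluate the calibrated probabilities with 10-fold cross-validation
    scores = cross_val_score(calibrated, X, y, cv=10, scoring="neg_brier_score")
    print(f"{name}: mean Brier score = {-scores.mean():.4f}")
```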
CITATION
Gebel, M., & Weihs, C. (2007). Calibrating classifier scores into probabilities. In Studies in Classification, Data Analysis, and Knowledge Organization (pp. 141–148). Springer. https://doi.org/10.1007/978-3-540-70981-7_17