Calibrating classifier scores into probabilities

Abstract

This paper provides an overview of calibration methods for supervised classification learners. Calibration means mapping classifier scores into the probability space. Such probabilistic classifier output is especially useful when the classification results are subject to further post-processing. The calibrators are compared via 10-fold cross-validation according to their performance on SVM and CART outputs for four different two-class data sets.
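
To make the idea concrete, the sketch below shows the general recipe the abstract describes: wrapping a score-producing classifier in a calibrator fitted under 10-fold cross-validation. It is a minimal illustration assuming scikit-learn is available, using Platt-style sigmoid scaling on an SVM; it is not a reproduction of the paper's own calibrators or data sets.

# Minimal sketch of score calibration, assuming scikit-learn.
# load_breast_cancer is only a stand-in two-class data set.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An SVM's decision_function yields uncalibrated margins, not probabilities.
svm = SVC()

# Wrap the SVM in a sigmoid (Platt-style) calibrator fitted with 10-fold
# cross-validation, mirroring the evaluation protocol in the abstract.
calibrated = CalibratedClassifierCV(svm, method="sigmoid", cv=10)
calibrated.fit(X_train, y_train)

proba = calibrated.predict_proba(X_test)  # calibrated class probabilities
print(proba[:3])

Passing method="isotonic" to the same wrapper swaps in isotonic regression, one of the alternative calibrators such comparisons typically include.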

Citation (APA)
Gebel, M., & Weihs, C. (2007). Calibrating classifier scores into probabilities. In Studies in Classification, Data Analysis, and Knowledge Organization (pp. 141–148). Springer. https://doi.org/10.1007/978-3-540-70981-7_17
