Abstract
Classifier calibration does not always go hand in hand with the classifier's ability to separate the classes. There are applications where good classifier calibration, i.e., the ability to produce accurate probability estimates, is more important than class separation. When the amount of training data is limited, the traditional approach to improving calibration breaks down. In this article, we show how generating more data for calibration can improve calibration algorithm performance in many cases where a classifier does not naturally produce well-calibrated outputs and the traditional approach fails. The proposed approach adds computational cost, but because the main use case is small datasets, this extra cost remains insignificant, and prediction time is comparable to that of other methods. Of the tested classifiers, the largest improvement was observed with the random forest and naive Bayes classifiers. The proposed approach can therefore be recommended at least for those classifiers when the amount of data available for training is limited and good calibration is essential.
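To make the idea concrete, the sketch below shows one simple way such a scheme could look: a small calibration set of (score, label) pairs is enlarged with noise-perturbed copies before fitting a histogram-binning calibrator. This is an illustrative assumption, not the authors' actual data-generation method; both helper functions and all parameter values are hypothetical.

```python
import random

def histogram_binning(scores, labels, n_bins=10):
    """Fit a histogram-binning calibrator: map a raw score in [0, 1]
    to the empirical positive rate of its bin."""
    bins = [[] for _ in range(n_bins)]
    for s, y in zip(scores, labels):
        bins[min(int(s * n_bins), n_bins - 1)].append(y)
    # Bin estimate: mean label in the bin; fall back to the bin midpoint
    # when a bin received no calibration samples.
    rates = [sum(b) / len(b) if b else (i + 0.5) / n_bins
             for i, b in enumerate(bins)]
    return lambda s: rates[min(int(s * n_bins), n_bins - 1)]

def jitter_augment(scores, labels, n_copies=5, sigma=0.02, seed=0):
    """Enlarge a small calibration set by adding noise-perturbed copies
    of each (score, label) pair, clipping scores to [0, 1]."""
    rng = random.Random(seed)
    aug_s, aug_y = list(scores), list(labels)
    for s, y in zip(scores, labels):
        for _ in range(n_copies):
            aug_s.append(min(1.0, max(0.0, s + rng.gauss(0, sigma))))
            aug_y.append(y)
    return aug_s, aug_y

# Toy calibration set: raw scores from some classifier plus true labels.
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.65, 0.8, 0.9]
labels = [0, 0, 0, 1, 0, 1, 1, 1]
aug_scores, aug_labels = jitter_augment(scores, labels)
cal = histogram_binning(aug_scores, aug_labels)
print(cal(0.15), cal(0.85))  # calibrated probabilities for two raw scores
```

With only eight original pairs, many bins of a histogram calibrator would be empty or hold a single sample; the augmented copies populate neighboring bins and smooth the per-bin estimates, which is the intuition the abstract appeals to.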
Alasalmi, T., Suutala, J., Röning, J., & Koskimäki, H. (2020). Better classifier calibration for small datasets. ACM Transactions on Knowledge Discovery from Data, 14(3). https://doi.org/10.1145/3385656