For classification problems in which class priors or misallocation costs are not known precisely, receiver operating characteristic (ROC) analysis has become a standard tool in pattern recognition for obtaining integrated performance measures that cope with this uncertainty. Similarly, when priors may vary in application, the ROC can be used to inspect performance over the expected range of variation. In this paper we argue that even though measures such as the area under the ROC (AUC) are useful for obtaining an integrated performance measure independent of the priors, it may also be important to incorporate the classifier's sensitivity across the expected prior range. We show that a classifier may achieve a good AUC score yet exhibit poor (large) prior sensitivity, which may be undesirable. A methodology is proposed that combines accuracy and prior sensitivity, providing a new model-selection criterion relevant to such problems. Experiments show that incorporating sensitivity is very important in some realistic scenarios, leading to better model selection in some cases. © Springer-Verlag Berlin Heidelberg 2006.
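To make the idea of combining an integrated accuracy measure with prior sensitivity concrete, the following is a minimal sketch, not the authors' exact criterion. It assumes the AUC as the integrated measure, takes the spread of the best achievable expected error over a hypothetical prior grid as the sensitivity term, and introduces an illustrative weighting parameter alpha; all of these specifics are assumptions for illustration only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve


def prior_sensitive_criterion(y_true, scores, priors=None, alpha=0.5):
    """Illustrative criterion combining AUC with sensitivity to the class prior.

    For each assumed positive-class prior p, the best operating point on the
    empirical ROC minimises the expected error
        err(p) = p * (1 - TPR) + (1 - p) * FPR.
    The spread of this minimum error over the prior range is used here as a
    simple prior-sensitivity measure (an assumption of this sketch); alpha
    trades it off against (1 - AUC). Lower criterion values are better.
    """
    if priors is None:
        priors = np.linspace(0.1, 0.9, 9)  # assumed range of prior variation

    auc = roc_auc_score(y_true, scores)
    fpr, tpr, _ = roc_curve(y_true, scores)

    best_errors = []
    for p in priors:
        err = p * (1.0 - tpr) + (1.0 - p) * fpr  # expected error at each ROC point
        best_errors.append(err.min())            # best achievable error at prior p
    best_errors = np.asarray(best_errors)

    sensitivity = best_errors.max() - best_errors.min()  # variation over the prior range
    return alpha * (1.0 - auc) + (1.0 - alpha) * sensitivity
```

Under this sketch, model selection would pick the classifier with the lowest criterion value on a validation set, so that a model with slightly lower AUC but much more stable performance across the expected prior range can still be preferred.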
Landgrebe, T., & Duin, R. P. W. (2006). Combining accuracy and prior sensitivity for classifier design under prior uncertainty. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4109 LNCS, pp. 512–521). Springer Verlag. https://doi.org/10.1007/11815921_56