A coherent interpretation of AUC as a measure of aggregated classification performance

207 citations · 218 Mendeley readers

Abstract

The area under the ROC curve (AUC), a well-known measure of ranking performance, is also often used as a measure of classification performance, aggregating over decision thresholds as well as class and cost skews. However, David Hand has recently argued that AUC is fundamentally incoherent as a measure of aggregated classifier performance and proposed an alternative measure (Hand, 2009). Specifically, Hand derives a linear relationship between AUC and expected minimum loss, where the expectation is taken over a distribution of the misclassification cost parameter that depends on the model under consideration. Replacing this distribution with a Beta(2,2) distribution, Hand derives his alternative measure H. In this paper we offer an alternative, coherent interpretation of AUC as linearly related to expected loss. We use a distribution over the cost parameter and a distribution over data points, both uniform and hence model-independent. Should one wish to consider only optimal thresholds, we demonstrate that a simple and more intuitive alternative to Hand's H measure is already available in the form of the area under the cost curve.
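To make the abstract's two central quantities concrete, here is a minimal numerical sketch in Python/NumPy on synthetic scores; it is not code from the paper. Part (1) estimates the expected loss when both the cost proportion and the data point whose score serves as the classification threshold are drawn uniformly, and compares it with a linear function of AUC. The loss convention, 2(c·π_neg·FPR + (1−c)·π_pos·FNR), and the closed form π_pos·π_neg·(1 − 2·AUC) + 1/2 are assumptions based on the cost-curve literature rather than quotations from the paper, so the two printed numbers should agree only up to Monte Carlo and discretization error. Part (2) computes the area under the cost curve, i.e. the expected minimum loss over a uniform cost distribution, which the abstract proposes as a simpler alternative to Hand's H measure when only optimal thresholds are considered.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic scores: positives (label 1) tend to score higher than negatives (label 0).
n_pos, n_neg = 1000, 1500
scores = np.concatenate([rng.normal(1.0, 1.0, n_pos), rng.normal(0.0, 1.0, n_neg)])
labels = np.concatenate([np.ones(n_pos, dtype=bool), np.zeros(n_neg, dtype=bool)])
pi_pos, pi_neg = labels.mean(), 1.0 - labels.mean()


def auc(scores, labels):
    """AUC as the probability that a random positive outscores a random negative."""
    pos, neg = scores[labels], scores[~labels]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()


def cost_loss(threshold, c):
    """Cost-proportion loss 2*(c*pi_neg*FPR + (1-c)*pi_pos*FNR), predicting
    'positive' for scores strictly above the threshold (an assumed convention)."""
    pred_pos = scores > threshold
    fpr = pred_pos[~labels].mean()    # false positive rate
    fnr = (~pred_pos)[labels].mean()  # false negative rate
    return 2.0 * (c * pi_neg * fpr + (1.0 - c) * pi_pos * fnr)


# (1) Expected loss with a uniform cost proportion and thresholds placed at the
#     score of a uniformly chosen data point (plain Monte Carlo estimate).
m = 20000
cs = rng.uniform(0.0, 1.0, m)
ts = rng.choice(scores, m)
expected_loss = np.mean([cost_loss(t, c) for t, c in zip(ts, cs)])

a = auc(scores, labels)
linear_in_auc = pi_pos * pi_neg * (1.0 - 2.0 * a) + 0.5  # assumed closed form

# (2) Area under the cost curve: for each cost proportion take the loss of the
#     best threshold (the lower envelope of all cost lines), then average over a
#     uniform grid of cost proportions.
thresholds = np.concatenate([[-np.inf], np.unique(scores)])  # includes trivial classifiers
fpr = np.array([(scores > t)[~labels].mean() for t in thresholds])
fnr = np.array([(scores <= t)[labels].mean() for t in thresholds])
c_grid = np.linspace(0.0, 1.0, 201)
all_losses = 2.0 * (c_grid[:, None] * pi_neg * fpr + (1.0 - c_grid)[:, None] * pi_pos * fnr)
area_under_cost_curve = all_losses.min(axis=1).mean()

print(f"AUC                                        : {a:.4f}")
print(f"Expected loss (uniform cost, uniform point): {expected_loss:.4f}")
print(f"Assumed linear form in AUC                 : {linear_in_auc:.4f}")
print(f"Area under the cost curve (min-loss avg)   : {area_under_cost_curve:.4f}")
```

Note that the area-under-the-cost-curve computation only ever uses the best threshold for each cost proportion, which is why the abstract positions it as the natural aggregate when one wishes to consider optimal thresholds only, whereas part (1) averages over thresholds chosen from the data distribution itself.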

Citation (APA)

Flach, P., Hernández-Orallo, J., & Ferri, C. (2011). A coherent interpretation of AUC as a measure of aggregated classification performance. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011 (pp. 657–664).
