A comparison study of cost-sensitive classifier evaluations

4 citations · 4 Mendeley readers

Abstract

Performance evaluation plays an important role in rule induction and classification. Classic evaluation measures have been studied extensively in the past, and in recent years cost-sensitive classification has received much attention. In a typical classification task, all types of classification errors are treated equally; in many practical cases, however, not all errors are equal, so it is critical to build a cost-sensitive classifier that minimizes the expected cost. This raises another important issue, namely cost-sensitive classifier evaluation. The main objective of this study is to investigate different aspects of this problem. We review five existing cost-sensitive evaluation measures and compare their similarities and differences. We find that in most cases the cost-sensitive measures produce evaluation results consistent with classic evaluation measures. However, when different cost values are applied, the performance differences between the algorithms change; it is reasonable to conclude that the evaluation results can change dramatically when certain cost values are applied. Moreover, by using cost curves to visualize the classification results, the performance of different classifiers, and the differences between them, can be easily seen. © 2012 Springer-Verlag.
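The core idea in the abstract — that classifiers with equal accuracy can rank differently once error types carry different costs — can be illustrated with a small sketch. This is not the paper's method or data; the cost matrix values and the `expected_cost` helper below are assumptions chosen for illustration only.

```python
import numpy as np

def expected_cost(confusion, costs):
    """Average misclassification cost: each confusion-matrix cell
    (count of examples) is weighted by the corresponding cost,
    then normalized by the total number of examples."""
    confusion = np.asarray(confusion, dtype=float)
    costs = np.asarray(costs, dtype=float)
    return float((confusion * costs).sum() / confusion.sum())

# Two hypothetical classifiers with identical accuracy (90/100 correct)
# but different error profiles (layout: rows = true class,
# columns = predicted class).
cm_a = [[50, 0], [10, 40]]   # A makes 10 false negatives
cm_b = [[40, 10], [0, 50]]   # B makes 10 false positives

# Assumed cost matrix: correct predictions cost 0, a false positive
# costs 1, a false negative costs 5.
costs = [[0, 1], [5, 0]]

print(expected_cost(cm_a, costs))  # 0.5 — A is penalized for costly false negatives
print(expected_cost(cm_b, costs))  # 0.1 — B is preferred despite equal accuracy
```

Under equal costs the two classifiers are indistinguishable; under the skewed cost matrix their expected costs differ by a factor of five, which is the kind of dramatic change in evaluation results the abstract describes.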

Citation (APA)

Zhou, B., & Liu, Q. (2012). A comparison study of cost-sensitive classifier evaluations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7670 LNAI, pp. 360–371). https://doi.org/10.1007/978-3-642-35139-6_34
