Assessing the reliability of a multi-class classifier

Abstract

Multi-class learning requires a classifier to discriminate among a large set of L classes in order to define a classification rule able to identify the correct class for new observations. The resulting classification rule may not always be robust, particularly when the classes are imbalanced or the data set is small. This paper presents a new approach for evaluating the reliability of a classification rule. It uses a standard classifier and assesses the reliability of the resulting classification rule by re-training the classifier on resampled versions of the original data. User-defined misclassification costs are assigned to the resulting confusion matrices and then used as inputs to a Beta regression model, which provides a cost-sensitive weighted classification index. The latter is used jointly with a second index measuring the dissimilarity in distribution between the observed classes and the predicted ones. Both indices take values in [0, 1], so they can be represented graphically as points in the [0, 1]² space. Visual inspection of these points for each classifier allows its reliability to be evaluated on the basis of the relationship between the index values obtained on the original data and those obtained on its resampled versions.
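To make the procedure concrete, the sketch below illustrates the resampling-and-indices idea under stated assumptions: a classifier is re-trained on bootstrap resamples, each resulting confusion matrix is weighted by a user-defined cost matrix, and two [0, 1]-valued indices are plotted as points in the [0, 1]² space. The data set, the uniform cost matrix, the 50 resamples, and the decision tree are all illustrative choices, not the paper's setup, and the paper's Beta regression weighting step is replaced here by a simple normalized cost average.

```python
# Minimal sketch of the resampling-and-indices idea from the abstract.
# Illustrative assumptions: uniform misclassification costs, 50 bootstrap
# resamples, a decision tree classifier; the Beta regression weighting of
# the paper is replaced by a simple normalized cost average.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.metrics import confusion_matrix
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

X, y = load_iris(return_X_y=True)
n_classes = len(np.unique(y))

# User-defined misclassification costs: 0 on the diagonal, 1 elsewhere
# (a uniform-cost choice made only for illustration).
cost = np.ones((n_classes, n_classes)) - np.eye(n_classes)

def indices(y_true, y_pred):
    """Return (cost-sensitive index, distributional dissimilarity), both in [0, 1]."""
    cm = confusion_matrix(y_true, y_pred, labels=range(n_classes))
    # Cost-sensitive index: average misclassification cost per observation,
    # scaled by the maximum cost so the result lies in [0, 1].
    cost_index = (cm * cost).sum() / (cm.sum() * cost.max())
    # Dissimilarity in distribution between observed and predicted classes:
    # total variation distance between the two class-frequency vectors.
    p_obs = np.bincount(y_true, minlength=n_classes) / len(y_true)
    p_pred = np.bincount(y_pred, minlength=n_classes) / len(y_pred)
    dissim = 0.5 * np.abs(p_obs - p_pred).sum()
    return cost_index, dissim

# Re-train the classifier on bootstrap resamples and collect one
# (cost index, dissimilarity) point per resample.
points = []
for b in range(50):
    Xb, yb = resample(X, y, random_state=b)
    clf = DecisionTreeClassifier(random_state=0).fit(Xb, yb)
    points.append(indices(y, clf.predict(X)))

points = np.array(points)
plt.scatter(points[:, 0], points[:, 1], alpha=0.5)
plt.xlabel("cost-sensitive index")
plt.ylabel("distributional dissimilarity")
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.show()
```

In this reading of the approach, a cloud of resampled points clustering tightly near the point obtained on the original data would suggest a reliable classification rule, while a widely scattered cloud would signal instability.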

Cite

APA

Frigau, L., Conversano, C., & Mola, F. (2016). Assessing the reliability of a multi-class classifier. In Studies in Classification, Data Analysis, and Knowledge Organization (pp. 207–217). Springer International Publishing. https://doi.org/10.1007/978-3-319-25226-1_18
