The multi-ranked classifiers comparison

Abstract

Is it true that everybody knows how to compare classifiers in terms of reliability? Probably not: it is common that, just after reading a paper, we feel that the analysis of classifier performance is not exhaustive and we would like to see more information, or more trustworthy information. The goal of this paper is to propose a method of multi-classifier comparison on several benchmark data sets. The proposed method is more trustworthy, deeper, and more informative (multi-aspect). Thanks to this method, we can see much more than overall performance. Today we need methods which do not merely answer the question of whether a given method is the best, because it almost never is. Apart from a general assessment of a learning machine's strength, we need to know when (and whether) its performance is outstanding, and whether its performance is unique.
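To illustrate the general idea of comparing several classifiers across benchmark data sets, the sketch below computes per-data-set ranks and average ranks from a table of accuracies. This is a minimal, generic rank-based comparison (in the spirit of Friedman-style average ranks), not the multi-ranked procedure proposed in the paper; the classifier names and accuracy values are purely hypothetical.

```python
import numpy as np

# Hypothetical accuracy table: rows = benchmark data sets, columns = classifiers.
# The numbers are illustrative only, not results from the paper.
accuracies = np.array([
    [0.91, 0.88, 0.93],   # data set 1
    [0.85, 0.86, 0.84],   # data set 2
    [0.78, 0.80, 0.79],   # data set 3
    [0.95, 0.94, 0.96],   # data set 4
])
classifiers = ["SVM", "kNN", "RandomForest"]

# Rank classifiers within each data set (rank 1 = best accuracy; ties ignored
# for simplicity). argsort of the negated accuracies gives the ordering of
# classifiers; a second argsort converts that ordering into ranks.
order = np.argsort(-accuracies, axis=1)
ranks = np.argsort(order, axis=1) + 1

# The mean rank across data sets summarises overall strength, while the spread
# of ranks hints at whether a classifier is uniformly good or only outstanding
# on some data sets -- the kind of multi-aspect view the abstract argues for.
for j, name in enumerate(classifiers):
    print(f"{name}: mean rank {ranks[:, j].mean():.2f}, "
          f"rank range {ranks[:, j].min()}-{ranks[:, j].max()}")
```

A single overall score hides exactly the information the paper asks about; reporting the distribution of ranks per classifier is one simple way to expose where a method wins and where it does not.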

Citation (APA)

Jankowski, N. (2016). The multi-ranked classifiers comparison. In Advances in Intelligent Systems and Computing (Vol. 403, pp. 111–123). Springer Verlag. https://doi.org/10.1007/978-3-319-26227-7_11
