Is it true that everybody knows how to compare classifiers in terms of reliability? Probably not: it is common that, just after reading a paper, we feel that the analysis of classifier performance is not exhaustive and we would like more information, or more trustworthy information. The goal of this paper is to propose a method for comparing multiple classifiers across several benchmark data sets. The proposed method is trustworthy, deeper, and more informative (multi-aspect): it reveals much more than overall performance. Today we need methods that do not merely answer whether a given method is the best, because it almost never is. Beyond a general strength assessment of a learning machine, we need to know when (and whether) its performance is outstanding, and whether its behavior is unique.
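As background for the kind of comparison the abstract alludes to, the sketch below shows a basic rank-based multi-classifier comparison over several data sets: each classifier is ranked per data set and the ranks are averaged. This is a common baseline scheme, not necessarily the paper's own method, and all classifier names and accuracy values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import rankdata

# Hypothetical accuracies: rows = benchmark data sets, columns = classifiers.
# All values are made up for illustration only.
accuracy = np.array([
    [0.91, 0.89, 0.85],
    [0.78, 0.80, 0.79],
    [0.66, 0.66, 0.61],
    [0.95, 0.93, 0.90],
])

# Rank classifiers within each data set (rank 1 = highest accuracy);
# rankdata assigns tied entries their average rank, the usual convention
# in rank-based classifier comparisons.
ranks = np.apply_along_axis(lambda row: rankdata(-row), 1, accuracy)

# The mean rank across data sets summarizes each classifier's overall
# standing: lower is better.
mean_ranks = ranks.mean(axis=0)
print(mean_ranks)  # one averaged rank per classifier
```

Note that the averaged rank alone hides *where* a classifier excels, which is exactly why the paper argues for a multi-aspect comparison rather than a single overall score.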
CITATION STYLE
Jankowski, N. (2016). The multi-ranked classifiers comparison. In Advances in Intelligent Systems and Computing (Vol. 403, pp. 111–123). Springer Verlag. https://doi.org/10.1007/978-3-319-26227-7_11