Multi-class and cluster evaluation measures based on Rényi and Tsallis entropies and mutual information

Abstract

The evaluation of cluster and classification models against ground truth information or other models remains an objective for many applications. Frequently, this leads to controversial debates regarding the informative content, particularly for cluster evaluations. Similar problems occur for imbalanced class cardinalities. One way to handle such evaluation tasks more naturally is to frame comparisons in terms of shared or non-shared information. Information-theoretic quantities like mutual information and divergence are designed to answer such questions. Besides formulations based on the most prominent Shannon entropy, alternative definitions based on relaxed entropy concepts are known, for example the Rényi and Tsallis entropies. Using these entropy concepts requires a corresponding readjustment of mutual information and of the evaluation measures derived from it. In the present paper we consider several information-theoretic evaluation measures based on different entropy concepts and compare them both theoretically and with respect to their performance in applications.
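To make the abstract's quantities concrete, the following is a minimal sketch (not the paper's implementation) of the Rényi and Tsallis entropies and of a mutual information built from them via I(X;Y) = H(X) + H(Y) − H(X,Y) on a cluster/class contingency table. Note that for α ≠ 1 this additive construction is only one of several possible generalized mutual-information definitions, since Rényi and Tsallis entropies are non-additive; the paper compares several such measures. All function names here are illustrative.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha).

    Recovers the Shannon entropy in the limit alpha -> 1.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # drop zero-probability events (0 * log 0 := 0)
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))  # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1).

    Also recovers the Shannon entropy in the limit q -> 1.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def generalized_mutual_information(contingency, entropy, param):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) with a chosen entropy concept.

    `contingency` holds co-occurrence counts of, e.g., cluster labels
    (rows) vs. ground-truth classes (columns).
    """
    C = np.asarray(contingency, dtype=float)
    P = C / C.sum()                       # joint distribution
    h_rows = entropy(P.sum(axis=1), param)  # marginal over rows
    h_cols = entropy(P.sum(axis=0), param)  # marginal over columns
    h_joint = entropy(P.ravel(), param)
    return h_rows + h_cols - h_joint
```

For α = q = 1 both entropies reduce to the Shannon entropy and the construction yields the standard mutual information, e.g. zero for a contingency table of two independent uniform labelings such as `[[1, 1], [1, 1]]`.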

CITATION STYLE

APA

Villmann, T., & Geweniger, T. (2018). Multi-class and cluster evaluation measures based on Rényi and Tsallis entropies and mutual information. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10841 LNAI, pp. 736–749). Springer Verlag. https://doi.org/10.1007/978-3-319-91253-0_68
