An Evaluative Measure of Clustering Methods Incorporating Hyperparameter Sensitivity

Abstract

Clustering algorithms are often evaluated using metrics that compare predicted clusters with ground-truth assignments, such as the Rand index and NMI. However, algorithm performance can vary widely across hyperparameter settings, so model selection based on optimal performance under these metrics is discordant with how these algorithms are applied in practice, where labels are unavailable and tuning is often more art than science. It is therefore desirable to compare clustering algorithms not only on their optimally tuned performance, but also on some notion of how realistic it would be to obtain that performance in practice. We propose an evaluation of clustering methods that captures this ease of tuning by modeling the expected best clustering score under a given computation budget. To encourage adoption of the proposed metric alongside classic clustering evaluations, we provide an extensible benchmarking framework. We perform an extensive empirical evaluation of the proposed metric on popular clustering algorithms over a large collection of datasets from different domains, and find that it leads to several noteworthy observations.
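The core quantity in the abstract, the expected best clustering score under a computation budget, can be illustrated with a short Monte Carlo sketch. This is our own illustration of the general idea, not the authors' implementation: given a pool of scores from already-evaluated hyperparameter configurations, it estimates the score a practitioner would expect after trying `budget` configurations sampled uniformly at random.

```python
import random
import statistics

def expected_best_score(scores, budget, trials=10_000):
    """Monte Carlo estimate of the expected best score when `budget`
    hyperparameter configurations are drawn uniformly at random (without
    replacement) from a precomputed pool of evaluated scores."""
    return statistics.mean(
        max(random.sample(scores, budget)) for _ in range(trials)
    )

# Hypothetical example: NMI scores of one algorithm over a hyperparameter grid.
nmi_scores = [0.21, 0.35, 0.38, 0.52, 0.55, 0.71]
easy_to_tune = expected_best_score(nmi_scores, budget=2)
full_budget = expected_best_score(nmi_scores, budget=len(nmi_scores))
```

With `budget` equal to the full pool size, the estimate collapses to the classic best-case score; small budgets reward algorithms whose good configurations are easy to find, which is the sensitivity the metric is designed to expose.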

Citation (APA)

Mishra, S., Monath, N., Boratko, M., Kobren, A., & McCallum, A. (2022). An Evaluative Measure of Clustering Methods Incorporating Hyperparameter Sensitivity. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 7788–7796). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i7.20747
