Associative Deep Clustering: Training a Classification Network with No Labels

22 citations · 93 Mendeley readers

Abstract

We propose a novel end-to-end clustering training schedule for neural networks that is direct, i.e., the output is a probability distribution over cluster memberships. A neural network maps images to embeddings. We introduce centroid variables that have the same shape as the image embeddings and are jointly optimized with the network's parameters. This is achieved by a cost function that associates the centroid variables with the embeddings of input images. Finally, an additional layer maps embeddings to logits, allowing direct estimation of cluster membership. Unlike other methods, ours does not require training an additional classifier on the embeddings in a separate step. The proposed approach achieves state-of-the-art results in unsupervised classification, and we provide an extensive ablation study to demonstrate its capabilities.
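
To make the pipeline concrete, here is a minimal PyTorch sketch of the architecture the abstract describes: learnable centroid variables with the same shape as the embeddings, optimized jointly with the backbone, and an additional linear layer that maps embeddings to cluster logits. The association loss below is a simplified stand-in for the paper's cost function, and all names (DirectClusterer, association_loss) are illustrative, not taken from the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DirectClusterer(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim: int, n_clusters: int):
        super().__init__()
        self.backbone = backbone  # maps images -> embeddings of size embed_dim
        # Centroid variables: one learnable vector per cluster, with the same
        # shape as an image embedding; optimized jointly with the network.
        self.centroids = nn.Parameter(torch.randn(n_clusters, embed_dim))
        # Additional layer mapping embeddings to logits, so cluster membership
        # can be read off directly as a probability distribution.
        self.to_logits = nn.Linear(embed_dim, n_clusters)

    def forward(self, images: torch.Tensor):
        z = self.backbone(images)      # (B, D) embeddings
        logits = self.to_logits(z)     # (B, K) cluster logits
        return z, logits

def association_loss(z: torch.Tensor,
                     centroids: torch.Tensor,
                     logits: torch.Tensor) -> torch.Tensor:
    # Soft assignment of embeddings to centroids via dot-product similarity;
    # a simplified proxy for the paper's association cost.
    sim = z @ centroids.t()                     # (B, K)
    target = F.softmax(sim, dim=1)              # centroid-based soft labels
    # Pull the logits head toward the centroid-based assignment; gradients
    # flow into the backbone, the centroids, and the logits layer jointly.
    return F.kl_div(F.log_softmax(logits, dim=1), target,
                    reduction="batchmean")
```

At inference time, the argmax over the logits yields the cluster assignment directly, which reflects the abstract's point that no separate classifier needs to be trained on the embeddings.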

Citation (APA)

Haeusser, P., Plapp, J., Golkov, V., Aljalbout, E., & Cremers, D. (2019). Associative Deep Clustering: Training a Classification Network with No Labels. In Lecture Notes in Computer Science (Vol. 11269 LNCS, pp. 18–32). Springer. https://doi.org/10.1007/978-3-030-12939-2_2
