Multimodal Clustering via Deep Commonness and Uniqueness Mining


Abstract

Deep multimodal clustering methods have shown competitive performance among multimodal clustering algorithms. Existing algorithms usually improve multimodal clustering by exploiting the knowledge common to multiple modalities, which underutilizes the unique knowledge of each modality. In this paper, we enhance the mining of modality-common knowledge by simultaneously extracting the modality-unique knowledge of each modality. Specifically, we first use autoencoders to extract the modality-common and modality-unique features of each modality. Meanwhile, cross reconstruction is used to build latent connections among the modalities, i.e., to maintain the consistency of the modality-common features across modalities while heightening the diversity of the modality-unique features. The modality-common features are then fused to cluster the multimodal data. Experimental results on several benchmark datasets demonstrate that the proposed method clearly outperforms state-of-the-art methods.
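The objective sketched in the abstract can be illustrated with a minimal NumPy example. This is a sketch under assumptions, not the paper's implementation: it uses linear encoders/decoders in place of deep autoencoders, squared-error reconstruction, a squared-difference consistency term on the common features, and a decorrelation penalty as one plausible way to encourage diversity of the unique features; all dimensions and loss weights (`lam`, `gamma`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two modalities describing the same 4 samples (dimensions assumed).
X1 = rng.normal(size=(4, 6))
X2 = rng.normal(size=(4, 8))
d_c, d_u = 3, 2  # common / unique feature dimensions (assumed)

# Linear stand-ins for the encoders: each modality is mapped to a
# modality-common part and a modality-unique part.
Wc1, Wu1 = rng.normal(size=(6, d_c)), rng.normal(size=(6, d_u))
Wc2, Wu2 = rng.normal(size=(8, d_c)), rng.normal(size=(8, d_u))
# Linear stand-ins for the decoders: reconstruct each modality
# from concatenated [common, unique] features.
D1 = rng.normal(size=(d_c + d_u, 6))
D2 = rng.normal(size=(d_c + d_u, 8))

def features(X, Wc, Wu):
    """Split a modality into common and unique features."""
    return X @ Wc, X @ Wu

def total_loss(X1, X2, lam=1.0, gamma=0.1):
    c1, u1 = features(X1, Wc1, Wu1)
    c2, u2 = features(X2, Wc2, Wu2)
    # Self-reconstruction from each modality's own common + unique features.
    rec = (np.sum((X1 - np.hstack([c1, u1]) @ D1) ** 2)
           + np.sum((X2 - np.hstack([c2, u2]) @ D2) ** 2))
    # Cross reconstruction: swap the common features across modalities,
    # linking the modalities through their shared content.
    cross = (np.sum((X1 - np.hstack([c2, u1]) @ D1) ** 2)
             + np.sum((X2 - np.hstack([c1, u2]) @ D2) ** 2))
    # Consistency of common features; diversity of unique features,
    # here approximated by penalizing correlation between u1 and u2.
    consistency = np.sum((c1 - c2) ** 2)
    diversity = np.sum((u1.T @ u2) ** 2)
    return rec + cross + lam * consistency + gamma * diversity

loss = total_loss(X1, X2)
# After training, the common features would be fused, e.g.
# fused = (c1 + c2) / 2, and clustered with k-means.
print(loss > 0)
```

In this formulation the cross-reconstruction term is what ties the modalities together: modality 1 must be reconstructable from modality 2's common features (plus its own unique ones), which forces the common subspaces to agree while leaving room for the unique parts to differ.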

Citation (APA)

Zong, L., Miao, F., Zhang, X., & Xu, B. (2020). Multimodal Clustering via Deep Commonness and Uniqueness Mining. In International Conference on Information and Knowledge Management, Proceedings (pp. 2357–2360). Association for Computing Machinery. https://doi.org/10.1145/3340531.3412103
