Multi-View Information-Bottleneck Representation Learning


Abstract

In real-world applications, clustering or classification can usually be improved by fusing information from different views. Unsupervised representation learning on multi-view data has therefore become a compelling topic in machine learning. In this paper, we propose a novel and flexible unsupervised multi-view representation learning model, termed Collaborative Multi-View Information Bottleneck Networks (CMIB-Nets), which comprehensively explores the common latent structure and the view-specific intrinsic information while discarding superfluous information in the data, significantly improving the generalization capability of the model. Specifically, the proposed model relies on the information bottleneck principle to integrate the shared representation among different views with the view-specific representation of each view, promoting a complete multi-view representation and flexibly balancing the complementarity and consistency among multiple views. We conduct extensive experiments (including clustering analysis, robustness experiments, and an ablation study) on real-world datasets, which empirically demonstrate promising generalization ability and robustness compared to state-of-the-art methods.
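For context, the classical information-bottleneck objective that this line of work builds on (a standard single-view formulation due to Tishby et al., not the paper's own multi-view objective) compresses an input X into a representation Z while retaining information relevant to a target Y:

```latex
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```

Here I(·;·) denotes mutual information and β > 0 trades off compression of X against preservation of information about Y. In the multi-view setting sketched in the abstract, one would expect a per-view compression term for each view-specific representation plus terms tying the shared representation to all views; the exact CMIB-Nets objective is given in the paper itself.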

Citation (APA)

Wan, Z., Zhang, C., Zhu, P., & Hu, Q. (2021). Multi-View Information-Bottleneck Representation Learning. In 35th AAAI Conference on Artificial Intelligence, AAAI 2021 (Vol. 11B, pp. 10085–10092). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v35i11.17210
