Multi-view data are common in real-world applications, where different views describe the same objects from distinct perspectives. To better summarize the consistent and complementary information in multi-view data, researchers have proposed various multi-view representation learning algorithms, typically based on factorization models. However, most previous methods focus on shallow factorization models, which cannot capture complex hierarchical information. Although a deep multi-view factorization model has been proposed recently, it fails to explicitly discern consistent and complementary information in multi-view data and does not consider conceptual labels. In this work, we present a semi-supervised deep multi-view factorization method named Deep Multi-view Concept Learning (DMCL). DMCL factorizes the data hierarchically with nonnegativity constraints, capturing semantic structures and explicitly modeling the consistent and complementary information of multi-view data at the highest abstraction level. We develop a block coordinate descent algorithm for DMCL. Experiments on image and document datasets show that DMCL performs well and outperforms baseline methods.
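To make the hierarchical factorization idea concrete, below is a minimal sketch of a deep nonnegative factorization over multiple views, assuming a two-layer decomposition X_v ≈ U_v V_v H per view with a shared top-level representation H, optimized by multiplicative block-coordinate updates. The layer sizes, variable names, and update rules are illustrative assumptions only; this is not the authors' DMCL formulation, which additionally uses label supervision and an explicit separation of consistent and complementary components.

```python
# Illustrative sketch only: deep (two-layer) nonnegative factorization of
# multiple views with a shared top-level representation H. All names and
# hyperparameters here are assumptions, not the DMCL algorithm itself.
import numpy as np

def deep_multiview_nmf(views, k1=20, k2=10, n_iter=200, eps=1e-9, seed=0):
    """views: list of nonnegative matrices X_v, each of shape (d_v, n)."""
    rng = np.random.default_rng(seed)
    n = views[0].shape[1]
    U = [rng.random((X.shape[0], k1)) for X in views]  # first-layer bases per view
    V = [rng.random((k1, k2)) for _ in views]          # second-layer bases per view
    H = rng.random((k2, n))                            # shared top-level representation

    for _ in range(n_iter):
        # Block coordinate descent: update one factor block at a time
        # using standard multiplicative rules for the squared Frobenius loss.
        for v, X in enumerate(views):
            VH = V[v] @ H
            U[v] *= (X @ VH.T) / (U[v] @ VH @ VH.T + eps)
            V[v] *= (U[v].T @ X @ H.T) / (U[v].T @ U[v] @ V[v] @ (H @ H.T) + eps)
        # The shared H aggregates reconstruction terms from every view,
        # so it summarizes view-consistent structure.
        numer = sum((U[v] @ V[v]).T @ X for v, X in enumerate(views))
        denom = sum((U[v] @ V[v]).T @ (U[v] @ V[v]) @ H for v in range(len(views))) + eps
        H *= numer / denom
    return U, V, H

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    views = [rng.random((100, 50)), rng.random((80, 50))]  # two toy views, 50 samples
    U, V, H = deep_multiview_nmf(views, n_iter=100)
    err = sum(np.linalg.norm(X - U[v] @ V[v] @ H) for v, X in enumerate(views))
    print("total reconstruction error:", round(err, 3))
```

In this sketch the shared H plays the role of the highest-level representation; a semi-supervised variant such as DMCL would additionally couple H to the available concept labels, which is omitted here for brevity.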
Xu, C., Guan, Z., Zhao, W., Niu, Y., Wang, Q., & Wang, Z. (2018). Deep multi-view concept learning. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 2898–2904). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/402