Data from many real-world applications can be naturally represented as multi-view networks, in which different views encode different types of relationships (e.g., friendship, shared interests in music, etc.) between real-world individuals or entities. There is a pressing need for methods that produce low-dimensional, information-preserving, and typically nonlinear embeddings of such multi-view networks. However, most work on multi-view learning focuses on data that lack a network structure, and most work on network embedding has focused primarily on single-view networks. Against this background, we consider the multi-view network representation learning problem, i.e., the problem of constructing low-dimensional, information-preserving embeddings of multi-view networks. Specifically, we investigate a novel Generative Adversarial Network (GAN) framework for Multi-View Network Embedding, namely MEGAN, aimed at preserving the information from the individual network views while accounting for the connectivity across, and hence the complementarity of and correlations between, the different views. The results of our experiments on two real-world multi-view data sets show that the embeddings obtained using MEGAN outperform state-of-the-art methods on node classification, link prediction, and visualization tasks.
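To make the problem setting concrete, the following is a minimal, self-contained sketch of multi-view network embedding on a toy graph. It is not the MEGAN architecture: the abstract does not specify the model details, so this sketch learns one shared embedding per node across all views with simple gradient updates, and plain negative sampling stands in for the adversarial generator. The graph, learning rate, and dimensions are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multi-view network over 6 nodes (hypothetical example).
# View 1: two triangle communities {0,1,2} and {3,4,5}.
# View 2: cross-community links 0-3, 1-4, 2-5 (a complementary relation type).
n_nodes, dim = 6, 4
view1 = np.zeros((n_nodes, n_nodes))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    view1[i, j] = view1[j, i] = 1.0
view2 = np.zeros((n_nodes, n_nodes))
for i, j in [(0, 3), (1, 4), (2, 5)]:
    view2[i, j] = view2[j, i] = 1.0
views = [view1, view2]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One shared embedding matrix for all views, so the learned representation
# must preserve information from every view simultaneously.
Z = rng.normal(scale=0.1, size=(n_nodes, dim))

lr = 0.05
for epoch in range(100):
    for A in views:
        for i, j in np.argwhere(np.triu(A) > 0):
            # Observed edge: push the pair's score toward 1.
            p = sigmoid(Z[i] @ Z[j])
            zi, zj = Z[i].copy(), Z[j].copy()
            Z[i] += lr * (1.0 - p) * zj
            Z[j] += lr * (1.0 - p) * zi
            # Sampled non-edge ("fake" pair): push its score toward 0.
            k = rng.integers(n_nodes)
            if k != i and A[i, k] == 0:
                q = sigmoid(Z[i] @ Z[k])
                zk = Z[k].copy()
                Z[i] -= lr * q * zk
                Z[k] -= lr * q * Z[i]
```

After training, node pairs that are connected in some view score higher than pairs unconnected in every view, which is the property the downstream link prediction and node classification tasks rely on.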
Sun, Y., Wang, S., Hsieh, T. Y., Tang, X., & Honavar, V. (2019). Megan: A generative adversarial network for multi-view network embedding. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 3527–3533). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/489