Multimodal Graph Meta Contrastive Learning

Abstract

In recent years, graph contrastive learning has achieved promising node classification accuracy using graph neural networks (GNNs), which can learn representations in an unsupervised manner. However, although such representations perform well on seen classes, they fail to generalize to unseen novel classes for which only a few labeled samples are available. To endow graph contrastive learning with this generalization capability, we propose multimodal graph meta contrastive learning (MGMC) in this paper, which integrates multimodal meta learning into graph contrastive learning. On one hand, MGMC achieves effective fast adaptation to unseen novel classes with the aid of bilevel meta optimization, thereby solving few-shot problems. On the other hand, MGMC generalizes quickly to a generic dataset with a multimodal distribution by introducing a FiLM-based modulation module. In addition, MGMC incorporates the latest graph contrastive learning method, which does not rely on the construction of augmentations and negative examples. To the best of our knowledge, this is the first work to investigate graph contrastive learning for few-shot problems. Extensive experimental results on three graph-structured datasets demonstrate the effectiveness of the proposed MGMC on few-shot node classification tasks.
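
The two mechanisms named in the abstract, FiLM-based modulation and bilevel meta optimization, are established techniques that can be illustrated concretely. The PyTorch sketch below is not the authors' implementation; it is a minimal illustration under assumed names (FiLM, ToyEncoder, meta_step, with a toy linear encoder standing in for a GNN), showing how a task embedding can FiLM-modulate features while a MAML-style inner/outer loop adapts to a few-shot episode. It uses torch.func.functional_call, available in recent PyTorch releases.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

class FiLM(nn.Module):
    """Feature-wise Linear Modulation: predicts a per-feature scale (gamma)
    and shift (beta) from a task embedding and applies them to features."""
    def __init__(self, task_dim, feat_dim):
        super().__init__()
        self.gamma = nn.Linear(task_dim, feat_dim)
        self.beta = nn.Linear(task_dim, feat_dim)

    def forward(self, h, task_emb):
        return self.gamma(task_emb) * h + self.beta(task_emb)

class ToyEncoder(nn.Module):
    """Linear layer + FiLM + classifier head; a stand-in for a GNN encoder."""
    def __init__(self, in_dim, hid_dim, task_dim, n_classes):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)
        self.film = FiLM(task_dim, hid_dim)
        self.head = nn.Linear(hid_dim, n_classes)

    def forward(self, x, task_emb):
        h = F.relu(self.enc(x))
        return self.head(self.film(h, task_emb))

def meta_step(model, meta_opt, episodes, inner_lr=0.01, inner_steps=3):
    """One bilevel (MAML-style) update: the inner loop adapts a copy of the
    parameters on each episode's support set; the outer loop updates the
    shared initialization from the query-set losses."""
    meta_opt.zero_grad()
    for sup_x, sup_y, qry_x, qry_y, task_emb in episodes:
        # Inner loop: differentiable gradient steps on cloned weights.
        fast = {n: p.clone() for n, p in model.named_parameters()}
        for _ in range(inner_steps):
            logits = functional_call(model, fast, (sup_x, task_emb))
            loss = F.cross_entropy(logits, sup_y)
            grads = torch.autograd.grad(loss, list(fast.values()),
                                        create_graph=True)
            fast = {n: w - inner_lr * g
                    for (n, w), g in zip(fast.items(), grads)}
        # Outer loop: the query loss backpropagates through the inner
        # updates (second-order) and accumulates gradients on the
        # original shared weights.
        q_logits = functional_call(model, fast, (qry_x, task_emb))
        F.cross_entropy(q_logits, qry_y).backward()
    meta_opt.step()
```

In this sketch, FiLM conditions the encoder on a per-task embedding so that a single initialization can cover a multimodal task distribution, while meta_step supplies the bilevel optimization the abstract describes; how MGMC actually combines these with augmentation-free graph contrastive learning is detailed in the paper itself.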

Citation (APA)

Zhao, F., & Wang, D. (2021). Multimodal Graph Meta Contrastive Learning. In International Conference on Information and Knowledge Management, Proceedings (pp. 3657–3661). Association for Computing Machinery. https://doi.org/10.1145/3459637.3482151
