AdaGCL: Adaptive Subgraph Contrastive Learning to Generalize Large-scale Graph Training

Abstract

Training graph neural networks (GNNs) with good generalizability on large-scale graphs is a challenging problem. Existing methods mainly divide the input graph into multiple subgraphs and train them in different batches to improve training scalability. However, the local batches obtained by such a strategy can contain topological bias compared with the complete graph structure. Prior studies have shown that this topological bias leads to larger gaps between training and testing performance, i.e., worse generalization robustness. A straightforward solution is to apply contrastive learning and train node embeddings to be robust and invariant across the augmented, imperfect graphs. However, most existing methods are inefficient because they contrast extensive node pairs over the large-scale graph, and with random data augmentation they may deteriorate the embedding process by transforming well-sampled batches into meaningless graph structures. To bridge the gap between large-scale graph training and contrastive learning, we propose adaptive subgraph contrastive learning (AdaGCL). Given a batch of sampled subgraphs, we propose a subgraph-granularity contrastive loss that compares the anchor node with a limited number of subgraphs, which reduces the computation cost. AdaGCL tailors two key components for batch training: (1) batch-aware view generation, which preserves the intrinsic structure of each individual subgraph in the batch so that informative node embeddings can be learned; and (2) batch-aware pair sampling, which constructs positive and negative contrasting subgraphs based on the anchor node's label. Experiments show that AdaGCL scales to graphs with millions of nodes and delivers consistent improvements over existing methods on various benchmark datasets. Furthermore, AdaGCL has running time comparable to state-of-the-art contrastive learning methods that focus on improving efficiency. Finally, ablation studies of the two components of AdaGCL demonstrate their effectiveness in generalizing batch training. The code is available at: https://github.com/YL-wang/CIKM_AdaGCL/.
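To make the subgraph-granularity contrastive loss described above concrete, the sketch below contrasts a single anchor node against a small set of pooled subgraph embeddings rather than against all other nodes. This is a minimal sketch under stated assumptions, not the paper's implementation: the InfoNCE-style form, the function name subgraph_contrastive_loss, the boolean positive mask, and the temperature value are illustrative choices, and the paper's actual loss and pair-sampling details may differ.

import torch
import torch.nn.functional as F

def subgraph_contrastive_loss(anchor, subgraph_embs, pos_mask, temperature=0.5):
    """Contrast one anchor node against k subgraph-level embeddings.

    anchor:        (d,)  embedding of the anchor node
    subgraph_embs: (k, d) pooled embeddings of k sampled subgraphs
    pos_mask:      (k,)  boolean mask marking subgraphs treated as positives
    """
    anchor = F.normalize(anchor, dim=0)
    subgraph_embs = F.normalize(subgraph_embs, dim=1)

    # Cosine similarity between the anchor and every subgraph embedding.
    sims = subgraph_embs @ anchor / temperature           # shape (k,)

    # Log-softmax over the k subgraphs in the batch.
    log_prob = sims - torch.logsumexp(sims, dim=0)

    # Average negative log-likelihood of the positive subgraphs.
    return -(log_prob[pos_mask]).mean()

Because each anchor is compared with only the k subgraphs in its batch, the cost per anchor grows with k rather than with the total number of nodes, which is consistent with the scalability motivation stated in the abstract.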

Citation (APA)

Wang, Y., Zhou, K., Miao, R., Liu, N., & Wang, X. (2022). AdaGCL: Adaptive Subgraph Contrastive Learning to Generalize Large-scale Graph Training. In International Conference on Information and Knowledge Management, Proceedings (pp. 2047–2056). Association for Computing Machinery. https://doi.org/10.1145/3511808.3557228
