Scaling up Graph Neural Networks Via Graph Coarsening

Abstract

Scalability of graph neural networks remains one of the major challenges in graph machine learning. Since the representation of a node is computed by recursively aggregating and transforming the representation vectors of its neighboring nodes from previous layers, the receptive fields grow exponentially, which makes standard stochastic optimization techniques ineffective. Various approaches have been proposed to alleviate this issue, e.g., sampling-based methods and techniques based on pre-computation of graph filters. In this paper, we take a different approach and propose to use graph coarsening for scalable training of GNNs, which is generic, extremely simple, and has sublinear memory and time costs during training. We present an extensive theoretical analysis of the effect of using coarsening operations and provide useful guidance on the choice of coarsening methods. Interestingly, our theoretical analysis shows that coarsening can also be considered a type of regularization and may improve generalization. Finally, empirical results on real-world datasets show that, by simply applying off-the-shelf coarsening methods, we can reduce the number of nodes by up to a factor of ten without a noticeable drop in classification accuracy.
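To make the idea concrete, below is a minimal sketch (not the authors' code) of training on a coarsened graph: nodes are merged into super-nodes via an assignment matrix, the adjacency, features, and labels are projected onto the coarse graph, and a small GCN is then run on that much smaller graph. The node-to-cluster assignment, function names, and shapes here are illustrative assumptions; in practice the assignment would come from an off-the-shelf coarsening algorithm.

```python
import numpy as np

def coarsen(A, X, y, assignment):
    """Project adjacency A, features X, and labels y onto super-nodes given a node->cluster map."""
    n, k = len(assignment), assignment.max() + 1
    C = np.zeros((n, k))                      # assignment (coarsening) matrix
    C[np.arange(n), assignment] = 1.0
    P = C / C.sum(axis=0, keepdims=True)      # column-normalized: averages member features
    A_c = C.T @ A @ C                         # coarse adjacency: cluster-to-cluster edge weights
    X_c = P.T @ X                             # coarse features: mean of member features
    # majority label per cluster (in practice, only labeled nodes would contribute)
    y_c = np.array([np.bincount(y[assignment == c]).argmax() for c in range(k)])
    return A_c, X_c, y_c

def gcn_forward(A_c, X_c, W1, W2):
    """Two-layer GCN forward pass on the coarse graph with symmetric normalization."""
    A_hat = A_c + np.eye(len(A_c))            # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt       # normalized propagation matrix
    H = np.maximum(S @ X_c @ W1, 0.0)         # first layer + ReLU
    return S @ H @ W2                         # class logits per coarse node

# Toy usage: 6 original nodes coarsened into 3 super-nodes.
A = np.array([[0,1,1,0,0,0],
              [1,0,1,0,0,0],
              [1,1,0,1,0,0],
              [0,0,1,0,1,1],
              [0,0,0,1,0,1],
              [0,0,0,1,1,0]], dtype=float)
X = np.random.randn(6, 4)
y = np.array([0, 0, 0, 1, 1, 1])
assignment = np.array([0, 0, 0, 1, 2, 2])     # hypothetical output of a coarsening algorithm
A_c, X_c, y_c = coarsen(A, X, y, assignment)
W1, W2 = np.random.randn(4, 8), np.random.randn(8, 2)
logits = gcn_forward(A_c, X_c, W1, W2)        # training happens on the small coarse graph
```

Because the GCN only ever sees the coarse graph, memory and per-epoch time scale with the number of super-nodes rather than the original node count, which is the source of the sublinear training cost claimed in the abstract.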

Citation (APA)

Huang, Z., Zhang, S., Xi, C., Liu, T., & Zhou, M. (2021). Scaling up Graph Neural Networks Via Graph Coarsening. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 675–684). Association for Computing Machinery. https://doi.org/10.1145/3447548.3467256
