MG-GCN: A Scalable multi-GPU GCN Training Framework

2 citations · 11 readers (Mendeley)

Abstract

Full-batch training of Graph Convolutional Network (GCN) models is not feasible on a single GPU for large graphs containing tens of millions of vertices or more. Recent work has shown that, for the graphs used in the machine learning community, communication becomes a bottleneck and scaling stalls beyond the single-machine regime. We therefore propose MG-GCN, a multi-GPU GCN training framework that takes advantage of the high-speed communication links between the GPUs in multi-GPU systems. MG-GCN employs several high-performance computing optimizations, including efficient re-use of memory buffers to reduce the memory footprint of training GNN models, and overlap of communication with computation. These optimizations enable execution on larger datasets that generally do not fit into the memory of a single GPU in state-of-the-art implementations, and they yield superior speedup compared to the state of the art. For example, MG-GCN achieves super-linear speedup with respect to DGL on the Reddit graph on both DGX-1 (V100) and DGX-A100.
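To make the partitioned-layer idea concrete, the sketch below models one GCN layer under a 1-D row partitioning of the adjacency matrix across devices, with each device accumulating partial sparse-dense products into a single reused buffer. This is a hypothetical CPU/NumPy illustration of the block schedule only, not the authors' MG-GCN implementation; in a real multi-GPU run each feature slice would be exchanged over NVLink/NCCL while the previous block's multiply executes on another stream.

```python
import numpy as np

def partitioned_gcn_layer(A_blocks, H_parts, W, n_parts):
    """One GCN layer, A @ H @ W, row-partitioned across n_parts devices.

    A_blocks[p][q] is "device" p's block of A that multiplies
    "device" q's feature slice H_parts[q].  (Illustrative sketch;
    names and partitioning scheme are assumptions, not MG-GCN's API.)
    """
    out = []
    for p in range(n_parts):
        # One accumulation buffer per device, reused across all peers,
        # instead of materialising a partial product per peer.
        acc = np.zeros((A_blocks[p][0].shape[0], H_parts[0].shape[1]))
        for q in range(n_parts):
            # "Receive" peer q's feature slice, then multiply-accumulate.
            # On GPUs, the receive of slice q+1 would overlap this step.
            acc += A_blocks[p][q] @ H_parts[q]
        out.append(acc @ W)  # the dense weight multiply stays local
    return out

# Usage: two "devices", verifying the partitioned result matches A @ H @ W.
rng = np.random.default_rng(0)
A, H, W = rng.random((4, 4)), rng.random((4, 3)), rng.random((3, 2))
A_blocks = [[A[:2, :2], A[:2, 2:]], [A[2:, :2], A[2:, 2:]]]
H_parts = [H[:2], H[2:]]
result = np.vstack(partitioned_gcn_layer(A_blocks, H_parts, W, 2))
```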

Citation (APA)

Balin, M. F., Sancak, K., & Catalyurek, U. V. (2022). MG-GCN: A Scalable multi-GPU GCN Training Framework. In ACM International Conference Proceeding Series. Association for Computing Machinery. https://doi.org/10.1145/3545008.3545082
