Optimizing the LINPACK Algorithm for Large-Scale PCIe-Based CPU-GPU Heterogeneous Systems


Abstract

There is a widening gap between the GPU and the other components (CPU, PCIe bus, and communication network) in heterogeneous parallel systems. This gap forces us to orchestrate cooperative execution among these components much more carefully than before. Taking the LINPACK benchmark as a case study, this article proposes a fine-grained pipelining algorithm for large-scale CPU-GPU heterogeneous cluster systems. First, we build an algorithmic model that reveals a new approach to GPU-centric, fine-grained pipelining algorithm design. Then, we present four model-driven pipelining algorithms that incrementally squeeze bubbles out of the pipeline so that more of it is occupied by useful floating-point computation. The algorithms are implemented on both AMD and NVIDIA GPU platforms. The fully optimized LINPACK program achieves 107 PFlops on 25,600 GPUs (70 percent floating-point efficiency). Several insights are drawn that suggest tradeoffs among algorithm design, programming support, and architecture design.
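The reported headline numbers (107 PFlops sustained on 25,600 GPUs at 70 percent floating-point efficiency) imply per-device figures that the abstract does not state directly. The sketch below derives them from the definition of floating-point efficiency (sustained divided by peak); the per-GPU values are inferences from the abstract's totals, not figures from the paper.

```python
# Back-of-envelope check of the reported LINPACK figures:
# 107 PFlops sustained on 25,600 GPUs at 70% floating-point efficiency.
# Per-GPU peak below is derived from these totals, not stated in the abstract.

sustained_pflops = 107.0   # reported sustained performance
num_gpus = 25_600          # reported GPU count
efficiency = 0.70          # reported floating-point efficiency

# Implied aggregate peak of the machine (efficiency = sustained / peak)
peak_pflops = sustained_pflops / efficiency

# Implied per-GPU figures, in TFlops (1 PFlops = 1000 TFlops)
sustained_per_gpu_tflops = sustained_pflops * 1000 / num_gpus
peak_per_gpu_tflops = peak_pflops * 1000 / num_gpus

print(f"aggregate peak ~ {peak_pflops:.0f} PFlops")
print(f"per-GPU sustained ~ {sustained_per_gpu_tflops:.2f} TFlops")
print(f"per-GPU peak ~ {peak_per_gpu_tflops:.2f} TFlops")
```

In other words, each GPU sustains roughly 4.2 TFlops out of an implied peak of roughly 6 TFlops, which is the 30 percent gap the pipelining algorithms aim to narrow by filling bubbles with useful computation.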

Citation (APA)

Tan, G., Shui, C., Wang, Y., Yu, X., & Yan, Y. (2021). Optimizing the LINPACK Algorithm for Large-Scale PCIe-Based CPU-GPU Heterogeneous Systems. IEEE Transactions on Parallel and Distributed Systems, 32(9), 2367–2380. https://doi.org/10.1109/TPDS.2021.3067731
