Prague: High-performance heterogeneity-aware asynchronous decentralized training

Citations: 60 · Readers (Mendeley): 106

Abstract

Distributed deep learning training usually adopts All-Reduce as the synchronization mechanism for data-parallel algorithms because of its high performance in homogeneous environments. However, its performance is bounded by the slowest worker, so it is significantly slower in heterogeneous settings. AD-PSGD, a recently proposed synchronization method that provides numerically fast convergence and heterogeneity tolerance, suffers from deadlock issues and high synchronization overhead. Is it possible to get the best of both worlds: a distributed training method with both the high performance of All-Reduce in homogeneous environments and the heterogeneity tolerance of AD-PSGD? In this paper, we propose Prague, a high-performance heterogeneity-aware asynchronous decentralized training approach. We achieve this goal through intensive synchronization optimization that exploits the interplay between algorithm and system implementation, that is, between statistical and hardware efficiency. To reduce synchronization cost, we propose a novel communication primitive, Partial All-Reduce, that enables fast synchronization among a group of workers. To reduce serialization cost, we propose static group scheduling in homogeneous environments and simple techniques, namely Group Buffer and Group Division, that largely eliminate conflicts with only slightly reduced randomness. Our experiments show that in a homogeneous environment, Prague is 1.2× faster than the state-of-the-art implementation of All-Reduce, 5.3× faster than Parameter Server, and 3.7× faster than AD-PSGD. In a heterogeneous setting, Prague tolerates slowdowns well and achieves a 4.4× speedup over All-Reduce.
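The core idea behind Partial All-Reduce, as described in the abstract, is to synchronize a small group of workers rather than the entire set. The sketch below illustrates that idea only; it is not the paper's Partial All-Reduce primitive, group generator, Group Buffer, or Group Division. It assumes PyTorch's torch.distributed with group-scoped all_reduce, a hypothetical four-worker job, and a fixed illustrative schedule in which workers {0, 2} and {1, 3} each form a group.

```python
# Minimal sketch of a group-scoped ("partial") all-reduce using PyTorch's
# torch.distributed. Illustrative only: the group schedule and helper names
# below are assumptions, not the paper's implementation.
import torch
import torch.distributed as dist


def average_within_group(model, group, group_size):
    """Average `model`'s parameters across the members of `group`.

    Must be called collectively by every rank that belongs to `group`.
    """
    for p in model.parameters():
        dist.all_reduce(p.data, op=dist.ReduceOp.SUM, group=group)
        p.data.div_(group_size)  # turn the sum into a group-wise average


if __name__ == "__main__":
    # Typical launch: torchrun --nproc_per_node=4 partial_allreduce_sketch.py
    dist.init_process_group(backend="gloo")
    rank = dist.get_rank()

    # Hypothetical schedule: workers {0, 2} form one group, {1, 3} another.
    # new_group() is a collective over the whole job, so every rank creates
    # both groups in the same order and then uses only the one it belongs to.
    group_a = dist.new_group(ranks=[0, 2])
    group_b = dist.new_group(ranks=[1, 3])
    my_group = group_a if rank in (0, 2) else group_b

    model = torch.nn.Linear(8, 2)
    average_within_group(model, my_group, group_size=2)  # one partial all-reduce step

    dist.destroy_process_group()
```

Because each group involves only two workers here, a slow worker delays only its own group's synchronization step, which is the heterogeneity-tolerance property the abstract attributes to group-wise synchronization.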

Citation (APA)

Luo, Q., He, J., Zhuo, Y., & Qian, X. (2020). Prague: High-performance heterogeneity-aware asynchronous decentralized training. In International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS (pp. 401–416). Association for Computing Machinery. https://doi.org/10.1145/3373376.3378499
