Throughput prediction of asynchronous SGD in TensorFlow

Abstract

Modern machine learning frameworks can train neural networks using multiple nodes in parallel, each computing parameter updates with stochastic gradient descent (SGD) and sharing them asynchronously through a central parameter server. Due to communication overhead and bottlenecks, the total throughput of SGD updates in a cluster scales sublinearly, saturating as the number of nodes increases. In this paper, we present an approach for predicting training throughput from profiling traces collected in a single-node configuration. Our approach models the interaction of multiple nodes and the scheduling of concurrent transmissions between the parameter server and each node. By accounting for the dependencies between received parts and pending computations, we predict overlaps between computation and communication and generate synthetic execution traces for configurations with multiple nodes. We validate our approach on TensorFlow training jobs for popular image classification neural networks, on AWS and on our in-house cluster, using nodes equipped with GPUs or only with CPUs. We also investigate the effects of data transmission policies used in TensorFlow and the accuracy of our approach when combined with optimizations of the transmission schedule.
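The Python sketch below is a toy illustration of the scaling behavior described above, not the paper's prediction model: it assumes a single shared parameter-server link and best-case overlap of computation and communication, and the function name and all numeric values are hypothetical stand-ins for the per-step compute time and transfer volume that a single-node profiling trace would supply.

    # Toy saturation model (illustrative only; not the method from the paper).
    # Inputs play the role of quantities measured in a single-node profiling trace.

    def predict_throughput(compute_time_s, bytes_per_update, link_bytes_per_s, num_workers):
        """Crude estimate of aggregate SGD updates/s for num_workers sharing one
        parameter-server link, assuming communication overlaps computation fully."""
        comm_time_s = bytes_per_update / link_bytes_per_s          # exclusive use of the link
        per_worker_rate = 1.0 / max(compute_time_s, comm_time_s)   # best case for one worker
        ideal_rate = num_workers * per_worker_rate                 # linear scaling, no contention
        link_limit = link_bytes_per_s / bytes_per_update           # shared link cannot exceed this
        return min(ideal_rate, link_limit)                         # throughput saturates here

    if __name__ == "__main__":
        # Hypothetical numbers: 200 ms compute per step, 100 MB moved per update,
        # 10 Gbit/s (1.25e9 bytes/s) link between workers and the parameter server.
        for n in (1, 2, 4, 8, 16, 32):
            rate = predict_throughput(0.200, 100e6, 1.25e9, n)
            print(f"{n:2d} workers -> ~{rate:5.1f} updates/s")

Running the sketch shows throughput growing roughly linearly until the shared link becomes the bottleneck, mirroring the saturation effect the paper sets out to predict from single-node traces.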

Citation (APA)

Li, Z., Yan, W., Paolieri, M., & Golubchik, L. (2020). Throughput prediction of asynchronous SGD in TensorFlow. In ICPE 2020 - Proceedings of the ACM/SPEC International Conference on Performance Engineering (pp. 76–87). Association for Computing Machinery, Inc. https://doi.org/10.1145/3358960.3379141
