Fast training of deep learning models over multiple GPUs

Abstract

This paper proposes FastT, a transparent module that works with the TensorFlow framework to automatically identify a good device placement and execution order for the operations of a DNN model over multiple GPUs, thereby expediting model training. We propose white-box algorithms that compute these strategies quickly and with modest computing resources. Similar recent studies optimize device placement using reinforcement learning; compared to those works, which take several hours and large amounts of computing resources to learn a placement of operations, our approach finds an excellent device placement and execution order within minutes, using the same computing node used for training. We design a family of scheduling algorithms to compute the device placement and execution order of each operation, as well as an algorithm that splits operations on the critical path to support fine-grained (mixed) data and model parallelism, further improving per-iteration training speed. We compare FastT with representative strategies and, through extensive testbed experiments, obtain insights on the best strategies for training different types of DNN models.
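To make the scheduling idea concrete, the sketch below shows a simple greedy list-scheduling heuristic in Python: operations are visited in topological order and each is placed on the GPU that minimizes its estimated finish time under a toy cost model with a constant cross-device transfer penalty. This is an illustrative assumption, not FastT's actual algorithm; the graph, cost values, and the comm_cost parameter are hypothetical.

    def list_schedule(ops, deps, compute_cost, comm_cost, num_gpus=2):
        """Greedily place each op (given in topological order) on the GPU
        that minimizes its estimated finish time.

        ops          : list of op names in topological order
        deps         : dict mapping op -> list of predecessor ops
        compute_cost : dict mapping op -> estimated compute time
        comm_cost    : penalty added when an input crosses devices
                       (assumed constant; purely illustrative)
        """
        gpu_free_at = [0.0] * num_gpus   # earliest time each GPU becomes idle
        finish_time = {}                 # op -> estimated finish time
        placement = {}                   # op -> GPU id

        for op in ops:
            best_gpu, best_finish = None, float("inf")
            for gpu in range(num_gpus):
                # An op can start once the GPU is free and all inputs have
                # arrived (cross-device inputs pay a transfer penalty).
                ready = gpu_free_at[gpu]
                for pred in deps.get(op, []):
                    arrival = finish_time[pred]
                    if placement[pred] != gpu:
                        arrival += comm_cost
                    ready = max(ready, arrival)
                finish = ready + compute_cost[op]
                if finish < best_finish:
                    best_gpu, best_finish = gpu, finish
            placement[op] = best_gpu
            finish_time[op] = best_finish
            gpu_free_at[best_gpu] = best_finish

        return placement, finish_time


    if __name__ == "__main__":
        # Toy 4-op graph: a -> b, a -> c, (b, c) -> d
        ops = ["a", "b", "c", "d"]
        deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
        cost = {"a": 1.0, "b": 3.0, "c": 3.0, "d": 1.0}
        placement, finish = list_schedule(ops, deps, cost, comm_cost=0.5)
        print(placement)    # e.g. {'a': 0, 'b': 0, 'c': 1, 'd': 1}
        print(finish["d"])  # makespan estimate for the toy graph

On the toy graph, the two independent branches (b and c) land on different GPUs because running them concurrently, even with the transfer penalty, finishes earlier than serializing them on one device; FastT's actual algorithms additionally decide execution order and split critical-path operations, which this sketch does not attempt.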

Citation

Yi, X., Wang, M., Luo, Z., Long, G., Meng, C., Wu, C., … Lin, W. (2020). Fast training of deep learning models over multiple GPUs. In Middleware '20: Proceedings of the 21st International Middleware Conference (pp. 105–118). Association for Computing Machinery. https://doi.org/10.1145/3423211.3425675
