Distributed training large-scale deep architectures

Abstract

The scale of data and the scale of computation infrastructure together enable the current deep learning renaissance. However, training large-scale deep architectures demands both algorithmic improvements and careful system configuration. In this paper, we focus on a systems approach to speeding up large-scale training. Taking both the algorithmic and system aspects into consideration, we develop a procedure for setting the mini-batch size and choosing computation algorithms. We also derive lemmas for determining the quantity of key components, such as the number of GPUs and parameter servers. Experiments and examples show that these guidelines effectively speed up large-scale deep learning training.
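
The paper's actual lemmas are not reproduced in this abstract. As a hedged illustration of the kind of resource-sizing guideline it describes, the sketch below estimates the minimum number of parameter servers needed so that gradient exchange does not bottleneck a given number of GPU workers. The function name, parameters, and example numbers are illustrative assumptions for this sketch, not the authors' derivation.

    import math

    def min_parameter_servers(num_gpus: int,
                              model_size_bytes: float,
                              step_time_s: float,
                              ps_bandwidth_bytes_per_s: float) -> int:
        """Hypothetical sizing heuristic (not the paper's lemma).

        Assumes each GPU worker pushes gradients and pulls updated weights once
        per training step, so the parameter-server tier must move roughly
        2 * model_size_bytes * num_gpus bytes per step. Returns the smallest
        number of servers whose combined bandwidth fits inside one step time.
        """
        bytes_per_step = 2 * model_size_bytes * num_gpus        # push + pull
        required_throughput = bytes_per_step / step_time_s      # bytes/s in aggregate
        return max(1, math.ceil(required_throughput / ps_bandwidth_bytes_per_s))

    # Example with assumed numbers: 16 GPUs, a 250 MB model, 0.5 s per step,
    # and a 10 Gb/s NIC per parameter server.
    if __name__ == "__main__":
        n_ps = min_parameter_servers(num_gpus=16,
                                     model_size_bytes=250e6,
                                     step_time_s=0.5,
                                     ps_bandwidth_bytes_per_s=10e9 / 8)
        print(f"Estimated parameter servers needed: {n_ps}")

With these assumed inputs the estimate comes to 13 servers; the point of such a calculation is that communication capacity, not GPU count alone, often determines how the parameter-server tier should be provisioned.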

Citation (APA)

Zou, S. X., Chen, C. Y., Wu, J. L., Chou, C. N., Tsao, C. C., Tung, K. C., … Chang, E. Y. (2017). Distributed training large-scale deep architectures. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10604 LNAI, pp. 18–32). Springer Verlag. https://doi.org/10.1007/978-3-319-69179-4_2
