Theano-MPI: A Theano-based distributed training framework


Abstract

We develop a scalable and extensible training framework that utilizes GPUs across the nodes of a cluster to accelerate the training of deep learning models through data parallelism. Both synchronous and asynchronous training are implemented in our framework, where parameter exchange among GPUs is based on CUDA-aware MPI. In this report, we analyze the convergence of the framework and its ability to reduce training time when scaling the synchronous training of AlexNet and GoogLeNet from 2 GPUs to 8 GPUs. In addition, we explore novel ways to reduce the communication overhead caused by exchanging parameters. Finally, we release the framework as open source for further research on distributed deep learning (https://github.com/uoguelph-mlrg/Theano-MPI).
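As a rough illustration of the synchronous data-parallel exchange the abstract describes, the following sketch averages gradients across workers with mpi4py's Allreduce. The names (params, local_grads) and the host-side NumPy buffers are illustrative assumptions, not the framework's API; Theano-MPI itself exchanges parameters directly between GPU buffers via CUDA-aware MPI.

# Minimal sketch of synchronous data-parallel gradient averaging.
# Assumed names (params, local_grads, lr) are illustrative only.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()

# Each worker holds an identical parameter replica and computes
# gradients on its own shard of the mini-batch.
params = np.zeros(1000, dtype=np.float32)
local_grads = np.random.rand(1000).astype(np.float32)  # stand-in for backprop output

# Sum the gradients across all workers, then average them.
summed = np.empty_like(local_grads)
comm.Allreduce(local_grads, summed, op=MPI.SUM)
avg_grads = summed / size

# Every worker applies the same averaged update, keeping the replicas in sync.
lr = 0.01
params -= lr * avg_grads

Launched with, e.g., mpirun -n 8 python sketch.py, each rank computes gradients on its own data shard and then applies an identical averaged update; this lockstep update is what distinguishes the synchronous mode from the asynchronous one, where workers push and pull parameters without waiting for each other.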

Citation (APA)

Ma, H., Mao, F., & Taylor, G. W. (2017). Theano-MPI: A Theano-based distributed training framework. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10104 LNCS, pp. 800–813). Springer Verlag. https://doi.org/10.1007/978-3-319-58943-5_64
