Neural networks, as powerful models for many difficult learning tasks, impose an increasingly heavy computational burden. More and more researchers are focusing on how to reduce training time, and one of the difficulties is establishing a general iteration-time prediction model. However, existing models are highly complex or tedious to build, and there is still room for improvement in prediction accuracy. Moreover, there is little systematic analysis of the multi-GPU setting, a special and widely used scenario. In this paper, we introduce a framework to analyze the training time of convolutional neural networks (CNNs) on multi-GPU platforms. Based on an analysis of GPU computation principles and their distinctive data-transfer modes, our framework decomposes the model and obtains accurate predictions without long-term training or complex data collection. We start by extracting key feature parameters related to GPUs, CNNs, and networks. Then, we map CNN architectures to constraints, including software platforms, GPU platforms, parallel strategies, and communication strategies. Finally, we present the prediction model and analyze training time from multiple perspectives. The proposed model is verified on four types of NVIDIA GPU platforms and six CNN architectures. The experimental results show that the average error across various scenarios is less than 15%, outperforming state-of-the-art results by 5%-30%, which confirms that our model is an effective tool for artificial intelligence (AI) researchers.
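To make the decomposition idea concrete, the sketch below illustrates the general shape of such a prediction model: per-iteration training time is split into per-layer computation time and inter-GPU gradient-communication time. This is a minimal illustration, not the authors' actual model; the abstract does not specify their formulas, so the layer/platform parameters, the backward-to-forward FLOP ratio, and the ring-allreduce cost formula used here are all assumptions chosen for exposition.

```python
# Minimal sketch (NOT the paper's model) of an analytical iteration-time
# predictor for data-parallel CNN training. All parameter names and cost
# formulas below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ConvLayer:
    flops: float   # forward-pass FLOPs for one mini-batch (assumed known)
    params: int    # learnable parameters whose gradients must be synchronized

@dataclass
class GPUPlatform:
    peak_flops: float      # sustained compute throughput (FLOP/s)
    link_bandwidth: float  # per-GPU interconnect bandwidth (bytes/s)
    link_latency: float    # per-message latency (s)

def compute_time(layer: ConvLayer, gpu: GPUPlatform, bwd_ratio: float = 2.0) -> float:
    """Forward + backward compute time. The backward pass is commonly
    approximated as ~2x the forward FLOPs (an assumption, not a measurement)."""
    return (1.0 + bwd_ratio) * layer.flops / gpu.peak_flops

def allreduce_time(num_params: int, gpu: GPUPlatform, n_gpus: int,
                   bytes_per_param: int = 4) -> float:
    """Standard ring-allreduce cost model: 2*(n-1) steps, each moving
    roughly 1/n of the gradient buffer, plus per-step latency."""
    volume = num_params * bytes_per_param
    steps = 2 * (n_gpus - 1)
    return steps * gpu.link_latency + (steps / n_gpus) * volume / gpu.link_bandwidth

def iteration_time(layers: list[ConvLayer], gpu: GPUPlatform, n_gpus: int) -> float:
    """Upper-bound estimate that serializes compute and communication.
    Real frameworks overlap the two, so this tends to overestimate."""
    t_comp = sum(compute_time(l, gpu) for l in layers)
    t_comm = sum(allreduce_time(l.params, gpu, n_gpus) for l in layers)
    return t_comp + t_comm

# Example: a toy two-layer CNN on a hypothetical 4-GPU platform.
if __name__ == "__main__":
    net = [ConvLayer(flops=2.3e9, params=1.8e6), ConvLayer(flops=1.1e9, params=4.1e6)]
    gpu = GPUPlatform(peak_flops=1.0e13, link_bandwidth=1.0e10, link_latency=5e-6)
    print(f"predicted iteration time: {iteration_time(net, gpu, n_gpus=4):.4f} s")
```

The paper's framework refines this kind of decomposition with platform-specific constraints (software stack, GPU generation, parallel and communication strategies) rather than a single fixed cost formula.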
Pei, Z., Li, C., Qin, X., Chen, X., & Wei, G. (2019). Iteration time prediction for CNN in Multi-GPU Platform: Modeling and analysis. IEEE Access, 7, 64788–64797. https://doi.org/10.1109/ACCESS.2019.2916550