In this paper we provide a thorough benchmarking of deep neural network (DNN) training on modern multi- and many-core Intel processors in order to assess performance differences across various deep learning and parallel computing parameters. We present DNN training performance for Alexnet, Googlenet, Googlenet_v2 and Resnet_50 for various engines used by the deep learning framework and for various batch sizes. Furthermore, we measure results for various numbers of threads, with ranges depending on the given processor(s), as well as for compact and scatter thread affinities. Based on the results, we formulate conclusions regarding optimal parameters and relative performance, which can serve as hints for researchers training similar networks on modern processors.
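For readers who want to run a similar parameter sweep, the sketch below illustrates one way to vary the thread count, the compact/scatter thread affinity and the batch size around a training command. OMP_NUM_THREADS and KMP_AFFINITY are standard Intel OpenMP runtime settings; the training executable, model name and parameter ranges are placeholders, not the benchmark harness used in the paper.

```python
import itertools
import os
import subprocess

# Assumed parameter ranges for illustration only; the paper's ranges
# depend on the processor(s) under test.
THREADS = [8, 16, 32, 64]
AFFINITIES = ["compact", "scatter"]
BATCH_SIZES = [32, 64, 128]

for threads, affinity, batch in itertools.product(THREADS, AFFINITIES, BATCH_SIZES):
    env = os.environ.copy()
    # Standard Intel OpenMP settings controlling thread count and placement.
    env["OMP_NUM_THREADS"] = str(threads)
    env["KMP_AFFINITY"] = f"granularity=fine,{affinity}"
    # Placeholder command; substitute the framework, engine and model under test.
    cmd = ["train_dnn", "--model", "resnet50", "--batch-size", str(batch)]
    subprocess.run(cmd, env=env, check=False)
```

Timing each run (e.g., images per second or time per epoch) under this kind of sweep yields the comparison of thread counts, affinities and batch sizes that the paper reports.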
Citation
Jabłońska, K., & Czarnul, P. (2020). Benchmarking Deep Neural Network Training Using Multi- and Many-Core Processors. In Lecture Notes in Computer Science (Vol. 12133, pp. 230–242). Springer. https://doi.org/10.1007/978-3-030-47679-3_20