This paper presents a parallel batch pattern back propagation training algorithm for a multilayer perceptron with two hidden layers, together with a study of its parallelization efficiency on a many-core high performance computing system. The multilayer perceptron model and the batch pattern training algorithm are described theoretically, and an algorithmic description of the parallel batch pattern training method is given. The results show high parallelization efficiency of the developed training algorithm on a large-scale data classification task, run on a many-core parallel computing system with 48 CPUs using MPI. © Springer International Publishing Switzerland 2014.
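The batch pattern scheme described in the abstract accumulates the error gradient over the whole pattern set before performing a single weight update, which is what makes it straightforward to parallelize: each worker sums gradients over its share of the patterns, and the partial sums are then combined (e.g. with MPI_Allreduce). A minimal serial NumPy sketch of one such two-hidden-layer perceptron is shown below; the toy layer sizes, activation function, and data are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: 2-D inputs, binary targets (assumed; the paper's dataset differs).
X = rng.standard_normal((64, 2))
y = (X[:, :1] * X[:, 1:2] > 0).astype(float)

# Weights for a 2 -> 8 -> 8 -> 1 perceptron (biases omitted for brevity).
W1 = rng.standard_normal((2, 8)) * 0.5
W2 = rng.standard_normal((8, 8)) * 0.5
W3 = rng.standard_normal((8, 1)) * 0.5

def forward(X, W1, W2, W3):
    h1 = sigmoid(X @ W1)
    h2 = sigmoid(h1 @ W2)
    out = sigmoid(h2 @ W3)
    return h1, h2, out

def batch_gradients(X, y, W1, W2, W3):
    # One pass over ALL patterns: the gradient is accumulated across the
    # whole batch before any weight changes -- the "batch pattern" step.
    # In the parallel version each worker would run this on its slice of
    # (X, y) and the three partial sums would be reduced across workers.
    h1, h2, out = forward(X, W1, W2, W3)
    d3 = (out - y) * out * (1 - out)          # squared-error output delta
    d2 = (d3 @ W3.T) * h2 * (1 - h2)          # second hidden layer delta
    d1 = (d2 @ W2.T) * h1 * (1 - h1)          # first hidden layer delta
    return X.T @ d1, h1.T @ d2, h2.T @ d3

def mse(X, y, W1, W2, W3):
    return float(np.mean((forward(X, W1, W2, W3)[2] - y) ** 2))

lr, n = 0.5, X.shape[0]
loss_before = mse(X, y, W1, W2, W3)
for _ in range(500):
    g1, g2, g3 = batch_gradients(X, y, W1, W2, W3)
    W1 -= lr * g1 / n
    W2 -= lr * g2 / n
    W3 -= lr * g3 / n
loss_after = mse(X, y, W1, W2, W3)
```

Because the per-pattern gradient contributions are independent and only summed, splitting the pattern set across 48 CPUs requires one collective reduction per epoch, which is the source of the high parallelization efficiency the abstract reports.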
CITATION STYLE
Turchenko, V., & Sachenko, A. (2014). Efficiency of Parallel Large-Scale Two-Layered MLP Training on Many-Core System. In Communications in Computer and Information Science (Vol. 440, pp. 201–210). Springer Verlag. https://doi.org/10.1007/978-3-319-08201-1_19