Efficiency of Parallel Large-Scale Two-Layered MLP Training on Many-Core System

Abstract

This paper presents a parallel batch-pattern back-propagation training algorithm for a multilayer perceptron with two hidden layers, together with a study of its parallelization efficiency on a many-core high-performance computing system. The multilayer perceptron model and the batch-pattern training algorithm are described theoretically, and an algorithmic description of the parallel batch-pattern training method is given. Our results show high parallelization efficiency of the developed training algorithm on a large-scale data classification task, executed on a many-core parallel computing system with 48 CPUs using MPI. © Springer International Publishing Switzerland 2014.
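The core idea of the batch-pattern scheme is that each process computes partial weight gradients over its own subset of the training patterns, the partial gradients are summed across all processes, and every process then applies the same weight update, so the replicated weights stay synchronized. Below is a minimal, hedged sketch of that scheme for a two-hidden-layer MLP using MPI via mpi4py; the synthetic data, sigmoid activations, mean-squared-error loss, layer sizes, and learning rate are illustrative assumptions, not the authors' exact formulation.

```python
# Sketch: parallel batch-pattern backprop for a two-hidden-layer MLP with MPI.
# Assumptions (not from the paper): sigmoid activations, MSE loss, synthetic data.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(0)  # same seed -> identical initial weights on all ranks
n_in, n_h1, n_h2, n_out = 10, 16, 8, 1
W1 = rng.standard_normal((n_in, n_h1)) * 0.1
W2 = rng.standard_normal((n_h1, n_h2)) * 0.1
W3 = rng.standard_normal((n_h2, n_out)) * 0.1

# Synthetic training batch, split evenly across ranks (batch-pattern scheme)
N = 480
X = rng.standard_normal((N, n_in))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)
lo, hi = rank * N // size, (rank + 1) * N // size
Xl, yl = X[lo:hi], y[lo:hi]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(100):
    # Forward pass over this rank's local subset of patterns
    a1 = sigmoid(Xl @ W1)
    a2 = sigmoid(a1 @ W2)
    out = sigmoid(a2 @ W3)

    # Backward pass: partial gradients from the local patterns only
    d3 = (out - yl) * out * (1 - out)
    d2 = (d3 @ W3.T) * a2 * (1 - a2)
    d1 = (d2 @ W2.T) * a1 * (1 - a1)
    g3, g2, g1 = a2.T @ d3, a1.T @ d2, Xl.T @ d1

    # Batch-pattern synchronization: sum partial gradients across all ranks
    for g in (g1, g2, g3):
        comm.Allreduce(MPI.IN_PLACE, g, op=MPI.SUM)

    # Identical update on every rank keeps the replicated weights in sync
    W1 -= lr * g1 / N
    W2 -= lr * g2 / N
    W3 -= lr * g3 / N

if rank == 0:
    print(f"local MSE on rank 0 after training: {float(np.mean((out - yl) ** 2)):.4f}")
```

Run with, for example, `mpirun -np 4 python train_mlp_mpi.py`. The Allreduce makes the summed gradient, and hence the weight trajectory, identical on every rank, which is the property that lets the pattern split scale across many processors such as the 48 CPUs reported in the paper.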

Citation (APA)

Turchenko, V., & Sachenko, A. (2014). Efficiency of Parallel Large-Scale Two-Layered MLP Training on Many-Core System. In Communications in Computer and Information Science (Vol. 440, pp. 201–210). Springer Verlag. https://doi.org/10.1007/978-3-319-08201-1_19
