Scalability of enhanced parallel batch pattern BP training algorithm on general-purpose supercomputers


Abstract

The development of an enhanced parallel algorithm for batch pattern training of a multilayer perceptron with the back propagation training algorithm, and an investigation of its efficiency on general-purpose parallel computers, are presented in this paper. An algorithmic description of the parallel version of the batch pattern training method is given. Several technical solutions that enhance the parallelization efficiency of the algorithm are discussed. The parallelization efficiency of the developed algorithm is investigated by progressively increasing the dimension of the parallelized problem on two general-purpose parallel computers. The experimental results show that (i) the enhanced version of the parallel algorithm is scalable and provides better parallelization efficiency than the previous implementation; (ii) the parallelization efficiency of the algorithm is high enough for efficient use of this algorithm on general-purpose parallel computers available within modern computational grids. © 2010 Springer-Verlag Berlin Heidelberg.
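The core idea behind batch pattern parallelization, as summarized in the abstract, is that the batch gradient is a sum over training patterns, so the patterns can be split among workers, each worker can compute partial gradients independently, and a single reduction plus synchronized weight update reproduces the serial result. The sketch below illustrates this scheme for a one-hidden-layer perceptron; it is a minimal, serially simulated illustration of the general technique, not the authors' implementation (all function names, shapes, and the learning rate are assumptions for the example).

```python
import numpy as np

def mlp_forward(W1, W2, X):
    # Hidden layer with sigmoid activation, linear output layer.
    H = 1.0 / (1.0 + np.exp(-X @ W1))
    return H, H @ W2

def batch_gradients(W1, W2, X, y):
    # Backpropagation over one batch of patterns under a squared-error
    # objective; gradients are sums over the patterns in the batch.
    H, out = mlp_forward(W1, W2, X)
    err = out - y                      # output-layer error
    gW2 = H.T @ err
    dH = (err @ W2.T) * H * (1.0 - H)  # backprop through the sigmoid
    gW1 = X.T @ dH
    return gW1, gW2

def parallel_batch_step(W1, W2, X, y, workers, lr=0.1):
    # Split the training patterns among `workers` (simulated serially here).
    # Each worker computes gradients on its slice; summing the slices'
    # gradients plays the role of the all-reduce step, after which one
    # synchronized weight update is applied.
    parts = np.array_split(np.arange(len(X)), workers)
    gW1 = np.zeros_like(W1)
    gW2 = np.zeros_like(W2)
    for idx in parts:
        g1, g2 = batch_gradients(W1, W2, X[idx], y[idx])
        gW1 += g1
        gW2 += g2
    return W1 - lr * gW1, W2 - lr * gW2
```

Because the batch gradient sums linearly over patterns, the updated weights are identical for any worker count; this determinism is what makes the batch pattern scheme amenable to the efficiency measurements the paper reports.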

APA

Turchenko, V., & Grandinetti, L. (2010). Scalability of enhanced parallel batch pattern BP training algorithm on general-purpose supercomputers. In Advances in Intelligent and Soft Computing (Vol. 79, pp. 525–532). https://doi.org/10.1007/978-3-642-14883-5_67
