This paper considers the development of a coarse-grain parallel algorithm for artificial neural network training with dynamic mapping of tasks onto the processors of a parallel computer system. The parallelization of this algorithm on a computational grid running the Globus middleware is compared with the results obtained on the Origin 300 parallel computer. Experiments show that, under an efficiency/price criterion, the computational grid is more efficient than the parallel computer. © Springer-Verlag Berlin Heidelberg 2005.
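The abstract's two key ideas can be illustrated with a minimal sketch (this is an assumption-laden illustration, not the paper's actual algorithm): coarse-grain parallelism means each worker runs a whole independent training task, and dynamic mapping means tasks are handed to whichever worker becomes free next. Below, a thread pool stands in for the grid or parallel-computer nodes, and each task trains a toy single-weight perceptron; the worker count, task count, and perceptron details are all illustrative choices.

```python
# Minimal sketch of coarse-grain parallel NN training with dynamic
# mapping (illustrative only; not the paper's algorithm). Each task
# is an entire independent training run; the pool's work queue
# assigns tasks to whichever worker is free (dynamic mapping).
from concurrent.futures import ThreadPoolExecutor
import random

def train_perceptron(task):
    """One coarse-grain task: train a single perceptron on its own data."""
    seed, epochs = task
    rng = random.Random(seed)
    # Toy separable data: target is 1 if x > 0, else 0.
    data = [rng.uniform(-1.0, 1.0) for _ in range(200)]
    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(epochs):
        for x in data:
            target = 1.0 if x > 0 else 0.0
            y = 1.0 if w * x + b > 0 else 0.0
            err = target - y
            w += lr * err * x           # classic perceptron update
            b += lr * err
    return seed, w, b

tasks = [(seed, 20) for seed in range(8)]  # 8 independent training tasks
with ThreadPoolExecutor(max_workers=4) as pool:
    # The pool dispatches each task to the next free worker.
    results = list(pool.map(train_perceptron, tasks))

for seed, w, b in results:
    print(f"task {seed}: w={w:.3f} b={b:.3f}")
```

Because each task touches only its own data, the workers never communicate during training, which is what makes the scheme coarse-grain and therefore tolerant of the high communication latency of a grid.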
Turchenko, V. (2005). Computational grid vs. parallel computer for coarse-grain parallelization of neural networks training. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3762 LNCS, pp. 357–366). https://doi.org/10.1007/11575863_55