Using the BSP cost model to optimise parallel neural network training

Abstract

We derive cost formulae for three different parallelisation techniques for training supervised neural networks. These formulae are parameterised by properties of the target computer architecture, so it is possible to decide the best match between a parallel computer and a training technique. One technique, exemplar parallelism, is far superior for almost all parallel computer architectures. The formulae also take optimal batch learning into account as the overall training approach.
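
As a rough illustration of the model behind these formulae (the paper's own derivations are not reproduced here), the standard BSP cost of a superstep is w + h·g + l, where w is the maximum local computation performed by any processor, h the maximum number of words any processor sends or receives, g the communication gauge of the machine, and l its barrier synchronisation cost. A minimal Python sketch of a per-batch cost estimate for exemplar parallelism, assuming p network replicas that each hold N/p training exemplars and perform a naive total exchange of W-word gradient vectors, might look like this (parameter names are illustrative, not the paper's notation):

def bsp_exemplar_cost(N, p, c, W, g, l):
    """Estimate the BSP cost of one training batch under exemplar parallelism.

    N: number of training exemplars; p: number of processors;
    c: local cost of one forward/backward pass per exemplar;
    W: number of network weights; g: BSP communication gauge
    (cost per word transmitted); l: barrier synchronisation cost.
    """
    local_work = (N / p) * c        # each replica trains on its own exemplars
    comm = (p - 1) * W * g          # total exchange of W-word gradient vectors
    return local_work + comm + l    # one superstep: compute, exchange, barrier

# Hypothetical machine and network parameters, purely for illustration:
print(bsp_exemplar_cost(N=60_000, p=32, c=100, W=10_000, g=4, l=5_000))

Because g and l appear explicitly, plugging in a given machine's parameters shows directly how well it matches this training technique, which is the kind of comparison the paper carries out across its three parallelisation schemes.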

Citation (APA)

Rogers, R. O., & Skillicorn, D. B. (1998). Using the BSP cost model to optimise parallel neural network training. In Lecture Notes in Computer Science (Vol. 1388, pp. 297–305). Springer-Verlag. https://doi.org/10.1007/3-540-64359-1_700
