We derive cost formulae for three parallelisation techniques for training supervised neural networks. These formulae are parameterised by properties of the target computer architecture, making it possible to determine the best match between a parallel computer and a training technique. One technique, exemplar parallelism, is far superior on almost all parallel computer architectures. The formulae also take optimal batch learning into account as the overall training approach.
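The paper's formulae build on the standard BSP cost model, in which the cost of one superstep is w + h·g + l (w: maximum local work, h: maximum words sent or received by any processor, g: per-word communication cost, l: barrier synchronisation cost). The sketch below illustrates that model applied to exemplar parallelism, where the network is replicated and the training batch is partitioned across processors; all numeric parameters are hypothetical and not taken from the paper.

```python
def bsp_superstep_cost(w, h, g, l):
    """Standard BSP cost of one superstep: local work + h-relation + barrier."""
    return w + h * g + l

# Hypothetical machine and problem parameters (illustrative only).
g, l = 4.0, 1000.0     # per-word comm cost and barrier cost, in flop units
weights = 10_000       # number of network weights
batch, p = 512, 8      # batch size and processor count

# Exemplar parallelism: each processor runs forward/backward passes on
# batch/p exemplars locally, then exchanges one gradient per weight.
work_per_proc = weights * batch / p   # dominant local computation
comm_volume = weights                 # gradient exchange, independent of batch size

cost = bsp_superstep_cost(work_per_proc, comm_volume, g, l)
print(f"cost per batch: {cost:.0f} flop-equivalents")
```

Because the communication term depends only on the number of weights, not the batch size, exemplar parallelism amortises communication well as batches grow, which is consistent with the paper's conclusion that it suits almost all architectures.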
Rogers, R. O., & Skillicorn, D. B. (1998). Using the BSP cost model to optimise parallel neural network training. In Lecture Notes in Computer Science (Vol. 1388, pp. 297–305). Springer. https://doi.org/10.1007/3-540-64359-1_700