Speculative Backpropagation for CNN Parallel Training

Abstract

Parallel learning can greatly shorten the training time of neural networks, but prior efforts were mostly limited to distributing inputs across multiple computing engines, because the gradient descent algorithm used in neural network training is inherently sequential. This paper proposes a novel parallel training method for CNN-based image recognition. It overcomes the sequential nature of gradient descent and enables parallel training through speculative backpropagation. We found that the Softmax and ReLU outcomes of the forward propagation are likely to be very similar for inputs with the same label. This characteristic makes it possible to perform the forward and backward propagation simultaneously. We implemented the proposed parallel model with CNNs in both software and hardware and evaluated its performance. The parallel training reduces training time by 34% on CIFAR-100 without loss of prediction accuracy compared to sequential training; in many cases, it even improves accuracy.
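
To make the abstract's idea concrete, the sketch below illustrates one plausible reading of speculative backpropagation: forward-pass outcomes (ReLU activations and Softmax probabilities) are cached per label, and when a new sample with a known label arrives, gradients are computed speculatively from the cached activations of a previous same-label sample, which in hardware could overlap with the new forward pass. The toy two-layer network, layer sizes, mismatch threshold `tol`, and verification step are illustrative assumptions, not the paper's actual design.

```python
# Minimal NumPy sketch of label-based speculative backpropagation (assumed design).
import numpy as np

rng = np.random.default_rng(0)
D, H, C = 32, 64, 10                               # input, hidden, class sizes (assumed)
W1, W2 = rng.normal(0, 0.1, (H, D)), rng.normal(0, 0.1, (C, H))

def forward(x):
    h = np.maximum(0, W1 @ x)                      # ReLU hidden activations
    logits = W2 @ h
    p = np.exp(logits - logits.max()); p /= p.sum()  # Softmax probabilities
    return h, p

def gradients(x, h, p, label):
    # Cross-entropy gradients computed from (possibly cached) activations.
    dlogits = p.copy(); dlogits[label] -= 1.0
    dW2 = np.outer(dlogits, h)
    dh = (W2.T @ dlogits) * (h > 0)                # backprop through the ReLU mask
    dW1 = np.outer(dh, x)
    return dW1, dW2

cache = {}                                         # label -> (h, p) from an earlier sample

def train_step(x, label, lr=0.01, tol=0.1):
    global W1, W2
    if label in cache:
        # Speculative backward pass using the cached same-label activations.
        h_spec, p_spec = cache[label]
        dW1, dW2 = gradients(x, h_spec, p_spec, label)
        h, p = forward(x)                          # in hardware this would overlap with the step above
        if np.abs(p - p_spec).max() > tol:         # speculation too far off: fall back to exact gradients
            dW1, dW2 = gradients(x, h, p, label)
    else:
        h, p = forward(x)
        dW1, dW2 = gradients(x, h, p, label)
    cache[label] = (h, p)
    W1 -= lr * dW1
    W2 -= lr * dW2

# Toy usage with random inputs and labels.
for _ in range(100):
    train_step(rng.normal(size=D), int(rng.integers(C)))
```

In this sketch the speculative gradients are accepted whenever the cached and freshly computed Softmax outputs stay within a tolerance, which mirrors the abstract's observation that same-label outcomes tend to be very similar; the verification policy itself is a hypothetical placeholder.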

Cite

APA

Park, S., & Suh, T. (2020). Speculative Backpropagation for CNN Parallel Training. IEEE Access, 8, 215365–215374. https://doi.org/10.1109/ACCESS.2020.3040849
