Deep neural network with limited numerical precision

Abstract

In convolutional neural networks, digital multiplication is the most chip-area- and power-consuming arithmetic operation. This paper trains convolutional neural networks with three different data formats (floating point, fixed point, and dynamic fixed point) on two datasets (MNIST and CIFAR-10). For each dataset and each data format, the paper assesses the impact of multiplication precision on the error rate at the end of training. The results show that the error rate of a network trained with low-precision fixed-point arithmetic differs only slightly from that of a network trained with floating point, indicating that low precision is sufficient for training these networks.
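To make the two low-precision formats in the abstract concrete, below is a minimal sketch (in Python/NumPy) of how fixed-point and dynamic fixed-point quantization can be simulated on a weight tensor. The word lengths, rounding mode (round-to-nearest), and the per-tensor scale selection for the dynamic format are illustrative assumptions; the paper's exact parameters and rounding scheme are not specified here.

    import numpy as np

    def to_fixed_point(x, word_bits=16, frac_bits=8):
        """Quantize x to a signed fixed-point format with `word_bits` total
        bits and `frac_bits` fractional bits (assumed parameters)."""
        scale = 2.0 ** frac_bits
        # Representable integer range for a signed word of `word_bits` bits.
        qmin = -(2 ** (word_bits - 1))
        qmax = 2 ** (word_bits - 1) - 1
        q = np.clip(np.round(x * scale), qmin, qmax)
        return q / scale  # back to float, to simulate quantization in software

    def to_dynamic_fixed_point(x, word_bits=16):
        """Dynamic fixed point: pick the fractional bit width per tensor so
        the largest magnitude in x still fits in the integer part."""
        max_abs = float(np.max(np.abs(x))) if x.size else 1.0
        int_bits = max(0, int(np.ceil(np.log2(max_abs + 1e-12))) + 1)
        frac_bits = max(0, word_bits - 1 - int_bits)
        return to_fixed_point(x, word_bits, frac_bits)

    # Usage: compare the quantization error of both formats on random weights.
    w = np.random.randn(1000) * 0.05
    for fn in (to_fixed_point, to_dynamic_fixed_point):
        print(fn.__name__, "max abs error:", np.max(np.abs(fn(w) - w)))

The dynamic format typically yields lower error on small-magnitude weight tensors because it reallocates unused integer bits to the fraction, which is the usual motivation for dynamic fixed point in network training.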


CITATION STYLE

APA

Cai, Y. X., Liang, C., Tang, Z. W., Li, H. S., & Gong, S. (2018). Deep neural network with limited numerical precision. In Advances in Intelligent Systems and Computing (Vol. 580, pp. 42–50). Springer Verlag. https://doi.org/10.1007/978-3-319-67071-3_8
