Backpropagation analysis of the limited precision on high-order function neural networks

Abstract

Quantization analysis of limited precision is widely used in the hardware realization of neural networks. Because most neural computations are required in the training phase, the effects of quantization are more significant in that phase. We analyze backpropagation training and recall under limited precision on high-order function neural networks (HOFNNs), point out the potential problems, and examine the performance sensitivity to lower-bit quantization. We compare training performance with and without weight clipping, and derive the effects of quantization error on backpropagation for both on-chip and off-chip training. Our experimental simulation results verify the presented theoretical analysis. © Springer-Verlag 2004.
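The abstract contrasts training with and without weight clipping under limited-precision weight storage. As a minimal sketch of that mechanism (not the paper's actual experimental setup), the following Python snippet quantizes weights to a fixed-point grid after each backpropagation update, with an optional saturation bound modeling weight clipping. The quantizer, the 8-bit fractional precision, the clip bound of 1.0, the toy XOR task, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def quantize(x, bits=8, clip=None):
    """Round x to a fixed-point grid with `bits` fractional bits.
    If `clip` is given, saturate values to [-clip, clip] first
    (weight clipping); otherwise magnitudes are unconstrained."""
    if clip is not None:
        x = np.clip(x, -clip, clip)
    step = 2.0 ** -bits            # quantization step size
    return np.round(x / step) * step

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy XOR task; architecture and learning rate are illustrative only.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = quantize(rng.normal(0, 0.5, (2, 4)), bits=8, clip=1.0)
W2 = quantize(rng.normal(0, 0.5, (4, 1)), bits=8, clip=1.0)
lr = 0.5

for epoch in range(2000):
    # Forward pass with quantized weights (the "recall" phase).
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backpropagate in full precision, then quantize the updated
    # weights, modeling limited-precision on-chip weight storage.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 = quantize(W2 - lr * h.T @ d_out, bits=8, clip=1.0)
    W1 = quantize(W1 - lr * X.T @ d_h, bits=8, clip=1.0)

print("final loss:", float(np.mean((out - y) ** 2)))
```

Here quantization is applied inside the training loop, mimicking on-chip training; an off-chip variant would keep full-precision master weights throughout training and quantize only once before recall, which is the distinction the abstract's on-chip/off-chip analysis draws.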

Citation (APA)

Jiang, M., & Gielen, G. (2004). Backpropagation analysis of the limited precision on high-order function neural networks. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3173, 305–310. https://doi.org/10.1007/978-3-540-28647-9_52
