Error analysis in the hardware neural networks applications using reduced floating-point numbers representation

Abstract

Hardware computation of large-scale neural network models is a challenge in terms of calculation precision, data bandwidth, memory capacity and overall system performance. The proposed reduced-precision computations in Half, Mini and Nibble floating-point formats are intended mainly for systems with computational power deficits, in order to increase efficiency and maximize hardware resource utilization. The study examined 10 neural network models of varying architectures (MLP, NAR) and purposes (fitting, classification, recognition, and prediction). The number representation of all ANN models was downgraded to Half (16-bit), Mini (8-bit) and Nibble (4-bit) floating-point precision. For each reduced-precision ANN, statistical error estimators such as MAE, MSE and the supremum error were calculated. Particular attention was devoted to pattern recognition networks, whose object classification ability was preserved despite an extreme precision reduction (4 bits). In addition, the paper briefly describes ANN bandwidth demands, the development of the reduced-precision formats and the achieved performance gains.
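The abstract does not spell out the reduced formats themselves, but the general approach, rounding values to a narrower floating-point representation and measuring the resulting error, can be sketched briefly. The Python snippet below is a minimal illustration under stated assumptions, not the paper's implementation: the exponent/mantissa splits (5/10 bits for Half, 4/3 for Mini, 2/1 for Nibble, plus a sign bit) and the sample values are assumed here for illustration only. It rounds double-precision values to a toy reduced format and reports the MAE, MSE and supremum error mentioned above.

```python
import math

def quantize_float(x, exp_bits, man_bits):
    """Round x to the nearest value representable in a toy floating-point
    format with the given exponent/mantissa widths (sign bit implied).
    Subnormals are flushed to zero for simplicity."""
    if x == 0.0 or math.isnan(x) or math.isinf(x):
        return x
    bias = 2 ** (exp_bits - 1) - 1
    m, e = math.frexp(abs(x))            # abs(x) = m * 2**e with m in [0.5, 1)
    m, e = m * 2.0, e - 1                # normalise so that m is in [1, 2)
    scale = 2.0 ** man_bits
    m = round((m - 1.0) * scale) / scale + 1.0   # round mantissa to man_bits bits
    if m >= 2.0:                         # rounding carried into the exponent
        m, e = 1.0, e + 1
    e_min, e_max = 1 - bias, bias
    if e < e_min:                        # magnitude too small: flush to zero
        return math.copysign(0.0, x)
    if e > e_max:                        # magnitude too large: saturate
        m, e = 2.0 - 1.0 / scale, e_max
    return math.copysign(m * 2.0 ** e, x)

def error_estimators(reference, reduced):
    """MAE, MSE and supremum (maximum absolute) error between two sequences."""
    diffs = [abs(a - b) for a, b in zip(reference, reduced)]
    n = len(diffs)
    return sum(diffs) / n, sum(d * d for d in diffs) / n, max(diffs)

# Hypothetical sample values and assumed exponent/mantissa splits.
samples = [0.731, -0.042, 1.318, 0.0057, -2.41]
formats = {"Half": (5, 10), "Mini": (4, 3), "Nibble": (2, 1)}
for name, (eb, mb) in formats.items():
    reduced = [quantize_float(v, eb, mb) for v in samples]
    mae, mse, sup = error_estimators(samples, reduced)
    print(f"{name:6s} MAE={mae:.4g}  MSE={mse:.4g}  sup={sup:.4g}")
```

As expected, the coarser the format, the larger all three estimators become; the 4-bit case shows why the paper's finding that classification ability survives such a reduction is notable.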

Citation (APA)

Pietras, M. (2015). Error analysis in the hardware neural networks applications using reduced floating-point numbers representation. In AIP Conference Proceedings (Vol. 1648). American Institute of Physics Inc. https://doi.org/10.1063/1.4912881
