Hardware computation of large-scale neural network models is a challenge in terms of calculation precision, data bandwidth, memory capacity and overall system performance. The proposed reduced-precision computations in Half, Mini and Nibble floating-point formats are intended mainly for systems with computational power deficits, in order to increase efficiency and maximize hardware resource utilization. The study examined 10 neural network models of varying architectures (MLP, NAR) and purposes (fitting, classification, recognition, and prediction). The number representation of all ANN models was downgraded to Half (16-bit), Mini (8-bit) and Nibble (4-bit) floating-point precision. For each reduced-precision ANN, statistical error estimators such as MAE, MSE and Supremum were calculated. Particular attention was devoted to pattern recognition networks, whose object classification ability was preserved despite an extreme precision reduction (4 bits). In addition, the paper briefly describes ANN bandwidth demands, the development of the reduced-precision formats and the achieved performance gains.
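As a rough illustration of the kind of evaluation described above, the sketch below (Python/NumPy, not taken from the paper) quantizes values to a simplified IEEE-like reduced floating-point format and computes the MAE, MSE and Supremum error of a toy MLP forward pass against a double-precision reference. The exponent/mantissa splits assumed for Mini (1 sign, 4 exponent, 3 mantissa bits) and Nibble (1-2-1), the flush/saturation behaviour and the toy network are illustrative assumptions; the abstract does not give the paper's exact format definitions or networks.

```python
import numpy as np

def quantize_float(x, exp_bits, man_bits):
    """Round x to a simplified IEEE-like float with 1 sign bit, `exp_bits`
    exponent bits and `man_bits` mantissa bits: round-to-nearest mantissa,
    subnormals near zero, saturation to the largest finite value on overflow."""
    x = np.asarray(x, dtype=np.float64)
    bias = 2 ** (exp_bits - 1) - 1
    min_exp = 1 - bias                               # exponent of smallest normal
    max_val = (2 - 2.0 ** -man_bits) * 2.0 ** bias   # largest finite value
    # Normal range: x = m * 2**e with m in [0.5, 1); round m to man_bits+1 bits
    m, e = np.frexp(x)
    m = np.round(m * 2 ** (man_bits + 1)) / 2 ** (man_bits + 1)
    normal = np.ldexp(m, e)
    # Subnormal range: round to multiples of the smallest representable step
    step = 2.0 ** (min_exp - man_bits)
    subnormal = np.round(x / step) * step
    y = np.where(np.abs(x) < 2.0 ** min_exp, subnormal, normal)
    return np.clip(y, -max_val, max_val)

def error_stats(reference, approx):
    """MAE, MSE and Supremum (max absolute) error against a reference."""
    err = np.abs(np.asarray(reference) - np.asarray(approx))
    return {"MAE": err.mean(), "MSE": np.mean(err ** 2), "Sup": err.max()}

# Toy MLP forward pass: compare double-precision output with reduced precision
rng = np.random.default_rng(0)
x = rng.standard_normal((100, 8))
W1 = rng.standard_normal((8, 16)) / np.sqrt(8)
W2 = rng.standard_normal((16, 4)) / np.sqrt(16)

y_ref = np.tanh(np.tanh(x @ W1) @ W2)                # float64 reference

# Assumed exponent/mantissa splits; the paper's exact formats may differ
formats = {"Half (16-bit)": (5, 10), "Mini (8-bit)": (4, 3), "Nibble (4-bit)": (2, 1)}
for name, (eb, mb) in formats.items():
    q = lambda a: quantize_float(a, eb, mb)
    h = q(np.tanh(q(x) @ q(W1)))                     # hidden layer, reduced precision
    y_red = q(np.tanh(h @ q(W2)))
    print(name, error_stats(y_ref, y_red))
```

With exp_bits = 5 and man_bits = 10 the quantizer reproduces IEEE half precision (largest finite value 65504), which provides a quick sanity check of the sketch.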
CITATION STYLE
Pietras, M. (2015). Error analysis in the hardware neural networks applications using reduced floating-point numbers representation. In AIP Conference Proceedings (Vol. 1648). American Institute of Physics Inc. https://doi.org/10.1063/1.4912881