Various forms of Deep Neural Network (DNN) architectures are used as Deep Learning tools for neurally inspired computational systems. The computational power, memory bandwidth, and energy demanded by current developments in the domain are very high, and the solutions offered by the current architectural environment are far from efficient. We propose a hybrid computational system for efficiently running DNN training and inference algorithms. The system is more energy-efficient than current solutions and achieves a higher ratio of actual performance to peak performance. The accelerator part of our heterogeneous system is a programmable many-core system with a Map-Scan/Reduce architecture. The chapter describes and evaluates the proposed accelerator on the main computationally intensive components of a DNN: the fully connected layer, the convolution layer, the pooling layer, and the softmax layer.
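For reference, the forward computations of the four layer types named above can be sketched in plain NumPy. This is a minimal illustration of the standard operations only, not the accelerator's Map-Scan/Reduce implementation described in the chapter; all function names and shapes are illustrative assumptions.

```python
import numpy as np

def fully_connected(x, W, b):
    # Dense layer: y = W x + b (a matrix-vector product plus bias)
    return W @ x + b

def conv2d(img, kernel):
    # Naive single-channel "valid" 2-D convolution (sliding dot product)
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, size=2):
    # Non-overlapping max pooling over size x size windows
    H, W = img.shape
    H2, W2 = H // size, W // size
    return img[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

def softmax(z):
    # Numerically stable softmax: shift by max before exponentiating
    e = np.exp(z - z.max())
    return e / e.sum()
```

Each function is a reduction-heavy kernel (dot products, window maxima, a normalizing sum), which is why such layers map naturally onto map/reduce-style many-core accelerators.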
Maliţa, M., Popescu, G. V., & Ştefan, G. M. (2020). Heterogeneous Computing System for Deep Learning. In Studies in Computational Intelligence (Vol. 866, pp. 287–319). Springer Verlag. https://doi.org/10.1007/978-3-030-31756-0_10