A novel neural network parallel adder


Abstract

Addition is the most commonly used arithmetic operation and is the speed-limiting element at the core of the arithmetic logic unit (ALU) in a microprocessor. The perceptron of feedforward neural networks, inspired by the threshold logic unit neuron model of McCulloch and Pitts, is one of the most important elements of artificial neural networks (ANN). This paper proposes a design for a neural network parallel adder (NNPA) within the framework of the multi-layer perceptron (MLP) of binary feedforward neural networks (BFNN). The DNA-like learning algorithm previously proposed by the present authors is successfully used to train the weight and threshold values of the NNPA. Moreover, the efficiency of the NNPA is compared with that of conventional adders such as the carry-ripple adder and the carry-look-ahead adder. It is shown that the proposed NNPA fully exploits advantages of ANNs such as synchronous, parallel, and fast information processing. © 2013 Springer-Verlag Berlin Heidelberg.
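To make the idea concrete, the following sketch shows how a one-bit full adder can be built from McCulloch-Pitts threshold logic units and chained into a carry-ripple adder, the conventional baseline the paper compares against. The specific weight and threshold values here are hand-chosen for illustration; in the paper they would instead be learned by the authors' DNA-like algorithm, and the NNPA computes carries in parallel rather than rippling them.

```python
def step(x, theta):
    """Threshold logic unit (McCulloch-Pitts neuron): fires iff weighted input >= theta."""
    return 1 if x >= theta else 0

def tlu_full_adder(a, b, cin):
    """One-bit full adder built from threshold units (two-layer, since XOR
    is not linearly separable). Weights/thresholds are illustrative, not
    the paper's learned values."""
    s = a + b + cin                    # all first-layer input weights are 1
    h1 = step(s, 1)                    # hidden unit: at least one input set
    h2 = step(s, 3)                    # hidden unit: all three inputs set
    cout = step(s, 2)                  # carry-out = majority of the three inputs
    sum_bit = step(h1 - cout + h2, 1)  # second layer: fires iff input count is odd
    return sum_bit, cout

def ripple_adder(x_bits, y_bits):
    """n-bit carry-ripple adder: chains TLU full adders, least significant bit first."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = tlu_full_adder(a, b, carry)
        out.append(s)
    return out + [carry]
```

For example, `ripple_adder([1, 0, 1], [1, 1, 0])` adds 5 and 3 (LSB first) and yields `[0, 0, 0, 1]`, i.e. 8. The sequential carry chain is exactly what limits this design's speed; the NNPA's parallel, synchronous evaluation is meant to avoid it.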

Citation (APA)

Chen, F., Wang, G., Chen, G., & He, Q. (2013). A novel neural network parallel adder. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7902 LNCS, pp. 538–546). https://doi.org/10.1007/978-3-642-38679-4_54
