Back Propagation Neural Network (BPNN) and Sigmoid Activation Function in Multi-Layer Networks

  • Asaad, R. R.
  • Ali, R. I.
Citations: N/A
Readers: 48 (Mendeley users who have this article in their library)

Abstract

Back propagation neural networks are known for solving problems that cannot easily be computed in artificial neural networks, such as the analysis or training of huge datasets. The main idea of this paper is to implement the XOR logic gate with an ANN, using back propagation of errors and the sigmoid activation function. The network realizes a non-linear threshold gate: the binary inputs (x1, x2) are classified by passing them through a hidden layer that computes the coefficient errors and gradient errors (C_errors, G_errors). After the error is computed as e_i = Output_desired − Output_actual, the weights and thetas are updated according to the errors: ΔW_ji = (α)(X_j)(g_i) and Δθ_j = (α)(−1)(g_i). The sigmoid activation function is sig(x) = 1/(1 + e^(−x)) and its derivative is dsig(x) = sig(x)(1 − sig(x)); both sig(x) and dsig(x) lie between 0 and 1.
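
To make the update rules concrete, here is a minimal sketch of the kind of network the abstract describes: two binary inputs, one sigmoid hidden layer, one sigmoid output, trained on XOR by back propagation. The hidden-layer size, learning rate α, epoch count, and the NumPy implementation are illustrative assumptions, not details taken from the paper.

    # A minimal sketch of the 2-input XOR network described above, assuming
    # a NumPy implementation with 2 hidden neurons, alpha = 0.5, and 20000
    # training epochs; none of these values come from the paper itself.
    import numpy as np

    def sig(x):
        # Sigmoid activation: sig(x) = 1/(1 + e^(-x)), bounded between 0 and 1.
        return 1.0 / (1.0 + np.exp(-x))

    def dsig(y):
        # Sigmoid derivative written in terms of the output y = sig(x):
        # dsig(x) = sig(x)(1 - sig(x)).
        return y * (1.0 - y)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs (x1, x2)
    T = np.array([[0], [1], [1], [0]], dtype=float)              # desired XOR outputs

    rng = np.random.default_rng(0)
    alpha = 0.5                        # learning rate (assumed)
    W1 = rng.uniform(-1, 1, (2, 2))    # input -> hidden weights
    b1 = rng.uniform(-1, 1, (1, 2))    # hidden biases (b = -theta)
    W2 = rng.uniform(-1, 1, (2, 1))    # hidden -> output weights
    b2 = rng.uniform(-1, 1, (1, 1))    # output bias (b = -theta)

    for epoch in range(20000):
        # Forward pass through the hidden and output layers.
        H = sig(X @ W1 + b1)
        Y = sig(H @ W2 + b2)

        # Error e_i = Output_desired - Output_actual, as in the abstract.
        E = T - Y

        # Gradient terms g_i: the error scaled by the sigmoid derivative,
        # propagated back from the output layer to the hidden layer.
        g_out = E * dsig(Y)
        g_hid = (g_out @ W2.T) * dsig(H)

        # Updates dW_ji = (alpha)(X_j)(g_i); since b = -theta here, the bias
        # update b += alpha * g is the same rule as dTheta = (alpha)(-1)(g).
        W2 += alpha * (H.T @ g_out)
        b2 += alpha * g_out.sum(axis=0, keepdims=True)
        W1 += alpha * (X.T @ g_hid)
        b1 += alpha * g_hid.sum(axis=0, keepdims=True)

    print(np.round(Y, 3))  # approaches [0, 1, 1, 0] as training converges

With an architecture this small, batch training can occasionally stall in a local minimum; re-running with a different random seed usually resolves it.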

Citation (APA)

Asaad, R. R., & Ali, R. I. (2019). Back Propagation Neural Network (BPNN) and Sigmoid Activation Function in Multi-Layer Networks. Academic Journal of Nawroz University, 8(4), 216–221. https://doi.org/10.25007/ajnu.v8n4a464
