Performance analysis of various activation functions in generalized MLP architectures of neural networks

  • Karlik, B.
  • Olgac, A.
ISSN: 2180-124X

Abstract

The activation function is used to transform the activation level of a unit (neuron) into an output signal. A number of activation functions are in common use with artificial neural networks (ANN). The most common choices of activation function for the multi-layer perceptron (MLP) are used as transfer functions in research and engineering. Among the reasons for this popularity are boundedness in the unit interval, the fast computability of the function and its derivative, and a number of amenable mathematical properties in the realm of approximation theory. However, considering the huge variety of problem domains in which the MLP is applied, it is reasonable to suspect that specific problems call for a single specific activation function or a set of them. The aim of this study is to analyze the performance of generalized MLP architectures trained with the back-propagation algorithm, using various activation functions for the neurons of the hidden and output layers. For the experimental comparisons, the Bi-polar sigmoid, Uni-polar sigmoid, Tanh, Conic Section, and Radial Basis Function (RBF) were used.
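For reference, a minimal sketch of the standard textbook forms of four of the activation functions named above. These are illustrative definitions only; the paper's exact parameterizations (in particular of the Conic Section function, which is omitted here) are given in the text and may differ. The center and width defaults in the RBF are hypothetical.

```python
import numpy as np

def uni_polar_sigmoid(x):
    # Uni-polar (logistic) sigmoid: output bounded in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def bi_polar_sigmoid(x):
    # Bi-polar sigmoid: output bounded in (-1, 1); equals tanh(x / 2)
    return (1.0 - np.exp(-x)) / (1.0 + np.exp(-x))

def tanh(x):
    # Hyperbolic tangent: output bounded in (-1, 1)
    return np.tanh(x)

def gaussian_rbf(x, center=0.0, width=1.0):
    # Gaussian radial basis function; center and width are illustrative defaults
    return np.exp(-((x - center) ** 2) / (2.0 * width ** 2))
```

The fast derivatives mentioned in the abstract follow directly from these forms; for example, the uni-polar sigmoid satisfies f'(x) = f(x) * (1 - f(x)), which is what makes it cheap to use in back-propagation.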

Citation (APA)

Karlik, B., & Olgac, A. (2010). Performance analysis of various activation functions in generalized MLP architectures of neural networks. International Journal of Artificial Intelligence and Expert Systems (IJAE), 1(4), 111–122. Retrieved from http://www.cscjournals.org/csc/manuscript/Journals/IJAE/volume1/Issue4/IJAE-26.pdf
