The Influence of the Sigmoid Function Parameters on the Speed of Backpropagation Learning

  • Han J
  • Moraga C

Abstract

The sigmoid function is the most commonly used activation function in feedforward neural networks because of its nonlinearity and the computational simplicity of its derivative. In this paper we discuss a variant sigmoid function with three parameters that denote its dynamic range, symmetry and slope, respectively. We illustrate how these parameters influence the speed of backpropagation learning and introduce a hybrid sigmoidal network with different parameter configurations in different layers. By regulating and modifying the sigmoid parameter configuration in different layers, the error-signal problem, the oscillation problem and the asymmetrical-input problem can be reduced. To compare the learning capability and the learning rate of the hybrid sigmoidal networks with those of conventional networks, we test them on the two-spirals benchmark, which is known to be a very difficult task for backpropagation and its relatives.
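To make the three parameters concrete, the short Python sketch below implements one common parametrization, assuming a range parameter a that scales the output, an offset b that controls symmetry (b = a/2 gives an output symmetric about zero, similar to a scaled hyperbolic tangent), and a slope parameter c that scales the input. The parameter names and the exact functional form here are illustrative assumptions and need not match the paper's definition.

    import numpy as np

    def parametric_sigmoid(x, a=1.0, b=0.0, c=1.0):
        # Assumed three-parameter sigmoid (illustrative, not necessarily the paper's form):
        #   a: dynamic range  (output spans from -b to a - b)
        #   b: symmetry offset (b = a/2 centres the output on zero)
        #   c: slope          (larger c gives a steeper transition around x = 0)
        return a / (1.0 + np.exp(-c * np.asarray(x, dtype=float))) - b

    def parametric_sigmoid_grad(x, a=1.0, b=0.0, c=1.0):
        # Derivative with respect to x, as needed in backpropagation.
        s = 1.0 / (1.0 + np.exp(-c * np.asarray(x, dtype=float)))
        return a * c * s * (1.0 - s)

    # Example: a bipolar unit with range 2, centred on zero, unit slope.
    x = np.linspace(-6.0, 6.0, 5)
    print(parametric_sigmoid(x, a=2.0, b=1.0, c=1.0))
    print(parametric_sigmoid_grad(x, a=2.0, b=1.0, c=1.0))

Under this parametrization, different layers of a hybrid network would simply use different (a, b, c) settings for their units.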

Cite

APA

Han, J., & Moraga, C. (1995). The influence of the sigmoid function parameters on the speed of backpropagation learning. In From Natural to Artificial Neural Computation (Lecture Notes in Computer Science, Vol. 930, pp. 195–201). Springer. Retrieved from http://dx.doi.org/10.1007/3-540-59497-3_175
