The field of artificial neural networks has developed alongside many disciplines, such as neurobiology, mathematics, statistics, economics, computer science, engineering and physics, to mention but a few. Consequently, the terminology used in the field varies from discipline to discipline. An initiative from the IEEE Neural Networks Council to standardise the terminology has resulted in recommended terminology and several definitions (Eberhart 1990). We present four of them.

Activation Function. Algorithm for computing the activation value of a neurode as a function of its net input. The net input is typically the sum of the weighted inputs to the neurode.

Feedforward Network. Network ordered into layers with no feedback paths. The lowest layer is the input layer, the highest is the output layer. The outputs of a given layer go only to higher layers and its inputs come only from lower layers.

Supervised Learning. Learning procedure in which a network is presented with a set of pairs of input patterns and targets. The network compares its output with the target and adapts itself according to the learning rules.

Unsupervised Learning. Learning procedure in which the network is presented with a set of input patterns only. The network adapts itself according to the statistical associations in the input patterns.

A later effort by Fiesler (1994), which appeared in Computer Standards and Interfaces, further tackles this issue. In particular, Fiesler considers functions important for neural networks, such as

1. neuron functions (or transfer functions), which specify the output of a neuron given its inputs (this includes the nonlinearity);
2. learning rules (or learning laws), which define how the weights (and offsets) are updated;
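To make the first two definitions concrete, the following is a minimal sketch, not taken from the appendix: it assumes a logistic nonlinearity and NumPy, and the names (logistic, neuron_output, feedforward) and the tiny 3-4-2 architecture are purely illustrative.

```python
import numpy as np

def logistic(net):
    """Logistic activation: maps the net input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-net))

def neuron_output(x, w, b):
    """Activation value of a single neurode: phi(net), where the
    net input is the sum of weighted inputs plus an offset."""
    net = np.dot(w, x) + b
    return logistic(net)

def feedforward(x, layers):
    """One pass through a layered network with no feedback paths:
    each layer's output feeds only the layers above it."""
    a = x
    for W, b in layers:  # from the lowest (input) to the highest (output) layer
        a = logistic(W @ a + b)
    return a

# Tiny example: 3 inputs -> 4 hidden neurodes -> 2 outputs
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
          (rng.standard_normal((2, 4)), np.zeros(2))]
y = feedforward(np.array([0.5, -1.0, 0.2]), layers)
```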
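The contrast between the two learning definitions can likewise be sketched as weight-update rules. The appendix does not prescribe particular rules; the Widrow-Hoff delta rule and the Hebbian rule below are standard examples chosen here for illustration, with a single linear neurode assumed for simplicity.

```python
import numpy as np

def delta_rule_update(w, x, target, eta=0.1):
    """Supervised: the network compares its output with the target
    and adapts the weights in proportion to the error (delta rule)."""
    y = np.dot(w, x)
    error = target - y
    return w + eta * error * x

def hebbian_update(w, x, eta=0.1):
    """Unsupervised: no target is given; the weights adapt to the
    statistical associations (correlations) in the input patterns."""
    y = np.dot(w, x)
    return w + eta * y * x  # strengthen weights where input and output co-activate

rng = np.random.default_rng(1)
w = rng.standard_normal(3)
w = delta_rule_update(w, np.array([1.0, 0.0, -1.0]), target=0.5)
w = hebbian_update(w, np.array([0.2, 1.0, 0.4]))
```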
Mandic, D. P., & Chambers, J. A. (2001). Appendix E: Terminology Used in the Field of Neural Networks. In Recurrent Neural Networks for Prediction (pp. 239–240). Wiley. https://doi.org/10.1002/047084535x.app5