In this paper, we present a formulation of the learning problem that allows deterministic nonmonotone learning behaviour to be generated, i.e. the values of the error function are allowed to increase temporarily while learning behaviour progressively improves overall. This is achieved by introducing a nonmonotone strategy on the error function values. We present four training algorithms equipped with this nonmonotone strategy and investigate their performance in symbolic sequence processing problems. Experimental results show that introducing a nonmonotone mechanism can improve traditional learning strategies and make them more effective on the sequence problems tested. © 2009 Springer-Verlag.
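The abstract does not give the exact acceptance rule, but nonmonotone strategies of this kind are commonly built on the Grippo–Lampariello–Lucidi condition: a step is accepted if the new error does not exceed the maximum error over the last M iterations, so the error may rise temporarily while the long-run trend improves. The sketch below is an illustrative scalar gradient-descent loop under that assumption; the names `M`, `gamma`, and the toy error function are hypothetical and not taken from the paper.

```python
from collections import deque

def nonmonotone_train(error, grad, w, lr=0.1, M=5, gamma=1e-4, steps=50):
    """Gradient descent with a nonmonotone (GLL-style) acceptance test.

    A trial step is accepted when the new error is at most the maximum
    of the last M recorded error values (minus a small sufficient-decrease
    term), allowing temporary increases of the error function.
    """
    history = deque([error(w)], maxlen=M)  # last M error values
    for _ in range(steps):
        g = grad(w)
        step = lr
        # Backtrack until the nonmonotone condition holds.
        while error(w - step * g) > max(history) - gamma * step * g * g:
            step *= 0.5
            if step < 1e-12:
                break
        w = w - step * g
        history.append(error(w))
    return w

# Toy quadratic error E(w) = (w - 3)^2 with gradient 2(w - 3).
w_star = nonmonotone_train(lambda w: (w - 3.0) ** 2,
                           lambda w: 2.0 * (w - 3.0),
                           w=0.0)
```

Because acceptance is tested against the window maximum rather than the previous value alone, the method tolerates occasional uphill error values, which is the behaviour the paper's formulation is designed to permit.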
Peng, C. C., & Magoulas, G. D. (2009). Nonmonotone learning of recurrent neural networks in symbolic sequence processing applications. In Communications in Computer and Information Science (Vol. 43 CCIS, pp. 325–335). https://doi.org/10.1007/978-3-642-03969-0_30