Nonmonotone learning of recurrent neural networks in symbolic sequence processing applications

Abstract

In this paper, we present a formulation of the learning problem that allows deterministic nonmonotone learning behaviour to be generated, i.e. the values of the error function are allowed to increase temporarily, although learning behaviour is progressively improved. This is achieved by introducing a nonmonotone strategy on the error function values. We present four training algorithms equipped with the nonmonotone strategy and investigate their performance on symbolic sequence processing problems. Experimental results show that introducing a nonmonotone mechanism can improve traditional learning strategies, making them more effective on the sequence problems tested. © 2009 Springer-Verlag.
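The abstract does not name the four training algorithms, but the core idea of a nonmonotone strategy can be sketched with a standard Grippo–Lampariello–Lucidi-style acceptance test: a step is accepted if the new error beats the worst error over the last M iterations, rather than the most recent one, so the error may rise temporarily while the overall trend improves. The sketch below is illustrative (function names, the toy objective, and parameter values such as `M` and `sigma` are assumptions, not the paper's algorithms):

```python
def nonmonotone_accept(f_new, f_history, grad_dot_d, alpha, sigma=1e-4, M=10):
    """Nonmonotone acceptance test: the new error value may exceed the
    most recent one, as long as it improves on the maximum error over
    the last M iterations (plus an Armijo-type sufficient-decrease term)."""
    f_ref = max(f_history[-M:])          # reference: worst recent error value
    return f_new <= f_ref + sigma * alpha * grad_dot_d

# Toy minimisation of f(w) = w^2 by gradient descent with backtracking,
# using the nonmonotone test in place of the usual monotone Armijo test.
f = lambda w: w * w
grad = lambda w: 2.0 * w

w = 3.0
history = [f(w)]
for _ in range(50):
    d = -grad(w)                         # steepest-descent direction
    alpha = 1.0
    while not nonmonotone_accept(f(w + alpha * d), history,
                                 grad(w) * d, alpha):
        alpha *= 0.5                     # backtrack until the test passes
    w += alpha * d
    history.append(f(w))
```

Because acceptance is judged against the recent maximum, larger trial steps survive the line search more often than under a strictly monotone rule, which is the mechanism the paper exploits to escape the flat or oscillatory error regions typical of recurrent-network training.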

Citation (APA)
Peng, C. C., & Magoulas, G. D. (2009). Nonmonotone learning of recurrent neural networks in symbolic sequence processing applications. In Communications in Computer and Information Science (Vol. 43 CCIS, pp. 325–335). https://doi.org/10.1007/978-3-642-03969-0_30
