A class of data-reusing (DR) learning algorithms for real-valued recurrent neural networks (RNNs) employed as nonlinear adaptive filters is extended to the complex domain, yielding a class of data-reusing learning algorithms for complex-valued recurrent neural networks (CRNNs). For rigour, the data-reusing complex real-time recurrent learning (DRCRTRL) algorithm is derived for a general complex activation function. The analysis provides both error bounds and convergence conditions for contractive and expansive complex activation functions. The improved performance of the data-reusing algorithm over the standard one is verified by simulations on the prediction of complex-valued signals.
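To illustrate the data-reusing principle in the complex domain, the following sketch applies it to a complex least-mean-square (CLMS) linear filter rather than the paper's full recurrent DRCRTRL setting: each input/target pair is reused several times per sample, with the error recomputed after every weight update. The function name, filter order, step size, and number of reuses are illustrative choices, not taken from the paper.

```python
import numpy as np

def data_reusing_clms(x, d, order=4, eta=0.01, reuses=3):
    """Data-reusing complex LMS (illustrative simplification of the
    data-reusing idea; the paper treats recurrent networks).
    Each (regressor, target) pair is reused `reuses` times,
    recomputing the a-priori error after every update."""
    w = np.zeros(order, dtype=complex)
    errors = []
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]           # regressor, most recent sample first
        for _ in range(reuses):            # reuse the same data pair
            e = d[n] - np.vdot(w, u)       # vdot conjugates w: y = w^H u
            w = w + eta * np.conj(e) * u   # standard CLMS weight update
        errors.append(abs(e))
    return w, errors
```

Reusing each sample effectively increases the adaptation gain while the error is recomputed at every inner step, which is what yields the tighter error bounds analysed in the paper for the recurrent case.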
Citation:
Goh, S. L., & Mandic, D. P. (2003). A data-reusing gradient descent algorithm for complex-valued recurrent neural networks. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2774, Part 2, pp. 340–350). Springer-Verlag. https://doi.org/10.1007/978-3-540-45226-3_47