Convergence of an online split-complex gradient algorithm for complex-valued neural networks


Abstract

The online gradient method has been widely used in training neural networks. In this paper, we consider an online split-complex gradient algorithm for complex-valued neural networks, with an adaptive learning rate chosen during the training procedure. Under certain conditions, by first showing the monotonicity of the error function, it is proved that the gradient of the error function tends to zero and the weight sequence converges to a fixed point. A numerical example is given to support the theoretical findings. © 2010 Huisheng Zhang et al.
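The abstract describes online (sample-by-sample) gradient training of a complex-valued network with a split-complex activation, i.e. a real activation applied to the real and imaginary parts of the net input separately. The following is a minimal sketch of that idea for a single complex-valued neuron with a decaying learning rate; the network, activation, learning-rate schedule, and all names here are illustrative assumptions, not the paper's exact algorithm or notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def act(u):
    # Split-complex activation: tanh applied to the real and imaginary
    # parts of the complex net input separately.
    return np.tanh(u.real) + 1j * np.tanh(u.imag)

def train(X, D, eta0=0.1, epochs=50):
    """Online split-complex gradient training of one complex neuron
    y = act(w^T x) on inputs X with complex targets D (toy sketch)."""
    n = X.shape[1]
    w = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    errors = []
    for epoch in range(epochs):
        eta = eta0 / (1.0 + 0.05 * epoch)  # simple decreasing learning rate
        total = 0.0
        for x, d in zip(X, D):
            u = np.dot(w, x)          # complex net input (no conjugation)
            y = act(u)
            e = y - d                 # complex output error
            total += 0.5 * abs(e) ** 2
            # Split-complex gradient w.r.t. w = a + i b, recombined into
            # complex form: grad = (e_R f'(u_R) + i e_I f'(u_I)) * conj(x),
            # using f'(t) = 1 - tanh(t)^2 = 1 - f(t)^2.
            delta = e.real * (1 - y.real**2) + 1j * e.imag * (1 - y.imag**2)
            w = w - eta * delta * np.conj(x)
        errors.append(total)
    return w, errors

# Toy data generated by a known teacher weight vector, so the target
# mapping is realizable by the model.
X = rng.standard_normal((20, 3)) + 1j * rng.standard_normal((20, 3))
w_true = np.array([0.5 - 0.2j, -0.3 + 0.4j, 0.1 + 0.1j])
D = np.array([act(np.dot(w_true, x)) for x in X])

w, errors = train(X, D)
```

Consistent with the monotonicity result in the abstract, the per-epoch error sequence `errors` should decrease for a sufficiently small learning rate on this toy problem.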

Citation (APA)

Zhang, H., Xu, D., & Wang, Z. (2010). Convergence of an online split-complex gradient algorithm for complex-valued neural networks. Discrete Dynamics in Nature and Society, 2010. https://doi.org/10.1155/2010/829692
