Global feedforward neural network learning for classification and regression


Abstract

This paper addresses the issues of global optimality and training of a Feedforward Neural Network (FNN) error function incorporating the weight decay regularizer. A network with a single hidden layer and a single output unit is considered. Explicit vector and matrix canonical forms for the Jacobian and Hessian of the network are presented. Convexity analysis is then performed using the known canonical structure of the Hessian. Next, a global optimality characterization of the FNN error function is attempted using the results of the convex characterization and a convex monotonic transformation. Based on this global optimality characterization, an iterative algorithm is proposed for global FNN learning. Numerical experiments with benchmark examples show better convergence of our network learning compared to many existing methods in the literature. The network is also shown to generalize well on a face recognition problem.
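The objective studied in the abstract — a single-hidden-layer, single-output FNN error function with a weight decay regularizer — can be sketched as follows. This is an illustrative NumPy formulation only (sigmoid hidden units, sum-of-squares loss, and the decay coefficient `lam` are assumptions; the paper's exact error function and canonical forms are given in the full text):

```python
import numpy as np

def fnn_forward(X, W1, b1, w2, b2):
    """Single-hidden-layer FNN with one output unit.

    X  : (n, d) inputs
    W1 : (d, h) input-to-hidden weights, b1 : (h,) hidden biases
    w2 : (h,)   hidden-to-output weights, b2 : scalar output bias
    """
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))  # sigmoid hidden activations
    return H @ w2 + b2                        # one scalar output per sample

def regularized_error(X, y, W1, b1, w2, b2, lam):
    """Sum-of-squares error plus a weight-decay penalty on all parameters."""
    r = fnn_forward(X, W1, b1, w2, b2) - y
    decay = np.sum(W1**2) + np.sum(b1**2) + np.sum(w2**2) + b2**2
    return 0.5 * np.sum(r**2) + 0.5 * lam * decay
```

The weight decay term makes the Hessian of the error function better conditioned (it adds `lam` to every diagonal entry), which is what enables the convexity and global optimality analysis described above.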

Citation (APA)

Toh, K. A., Lu, J., & Yau, W. Y. (2001). Global feedforward neural network learning for classification and regression. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2134, pp. 407–422). Springer Verlag. https://doi.org/10.1007/3-540-44745-8_27
