Continuous function optimisation via gradient descent on a neural network approximation function


Abstract

Existing neural network approaches to optimisation are limited in the types of problems they can solve: convergence theorems that rely on Liapunov functions restrict these techniques to minimising particular forms, usually quadratic functions. This paper proposes a new neural network approach that can solve a broad variety of continuous optimisation problems, since it makes no assumptions about the nature of the objective function. The approach comprises two stages: first, a feedforward neural network is used to approximate the objective function from a sample of evaluated data points; then, a feedback neural network is used to perform gradient descent on this approximation function. The final solution is a local minimum of the approximated function, which should coincide with a true local minimum of the objective if the learning has been accurate. The proposed method is evaluated on the De Jong test suite, a collection of continuous optimisation problems featuring characteristics such as saddle points, discontinuities, and noise. © Springer-Verlag Berlin Heidelberg 2001.
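The sketch below illustrates the two-stage idea described in the abstract, but it is not the authors' implementation: stage 1 fits a small multilayer perceptron to sampled (x, f(x)) pairs, and stage 2 performs gradient descent on the input of the trained approximation. Where the paper uses a feedback neural network for the descent stage, this sketch substitutes plain autodiff-based gradient descent. The example objective, network size, learning rates, and iteration counts are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the two-stage approach, assuming JAX is available.
# Stage 1: approximate the objective with a small MLP trained on samples.
# Stage 2: gradient descent on the network input to find a local minimum
# of the approximation (standing in for the paper's feedback network).
import jax
import jax.numpy as jnp

def objective(x):
    # Example objective (sphere function); any black-box function works here.
    return jnp.sum(x ** 2)

def init_params(key, dim, hidden=32):
    k1, k2 = jax.random.split(key)
    return {
        "W1": 0.1 * jax.random.normal(k1, (dim, hidden)),
        "b1": jnp.zeros(hidden),
        "W2": 0.1 * jax.random.normal(k2, (hidden, 1)),
        "b2": jnp.zeros(1),
    }

def mlp(params, x):
    # One-hidden-layer approximation of the objective function.
    h = jnp.tanh(x @ params["W1"] + params["b1"])
    return (h @ params["W2"] + params["b2"])[0]

def fit_surrogate(params, xs, ys, lr=1e-2, steps=2000):
    # Stage 1: least-squares fit of the MLP to the sampled (x, f(x)) pairs.
    def loss(p):
        preds = jax.vmap(lambda x: mlp(p, x))(xs)
        return jnp.mean((preds - ys) ** 2)
    grad_fn = jax.jit(jax.grad(loss))
    for _ in range(steps):
        g = grad_fn(params)
        params = jax.tree_util.tree_map(lambda p, gi: p - lr * gi, params, g)
    return params

def descend(params, x0, lr=1e-1, steps=500):
    # Stage 2: gradient descent on the input of the trained approximation.
    grad_x = jax.jit(jax.grad(lambda x: mlp(params, x)))
    x = x0
    for _ in range(steps):
        x = x - lr * grad_x(x)
    return x

key = jax.random.PRNGKey(0)
dim = 2
xs = jax.random.uniform(key, (200, dim), minval=-5.0, maxval=5.0)
ys = jax.vmap(objective)(xs)
params = fit_surrogate(init_params(key, dim), xs, ys)
x_min = descend(params, jnp.ones(dim))
print("approximate minimiser:", x_min)
```

As the abstract notes, the returned point is a local minimum of the approximation, so its quality depends on how accurately the surrogate was learned from the sampled evaluations.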

Citation (APA)

Smith, K. A., & Gupta, J. N. D. (2001). Continuous function optimisation via gradient descent on a neural network approximation function. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2084 LNCS, pp. 741–748). Springer Verlag. https://doi.org/10.1007/3-540-45720-8_89
