Variance algorithm for minimization

517 citations · 31 Mendeley readers

This article is free to access.

Abstract

An algorithm is presented for minimizing real-valued differentiable functions on an N-dimensional manifold. In each iteration, the value of the function and its gradient are computed just once and used to form new estimates for the location of the minimum and for the variance matrix (i.e. the inverse of the matrix of second derivatives). A proof is given of convergence within N iterations to the exact minimum and variance matrix for quadratic functions. Whether or not the function is quadratic, each iteration begins at the point where the function has the least of all past computed values. © 1968 The British Computer Society.
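The abstract describes a quasi-Newton style iteration: one function value and one gradient per iteration, a running estimate of the variance matrix (inverse Hessian), and each iteration restarting from the best point found so far. The sketch below is only an illustrative stand-in under those assumptions, not Davidon's published update formula; the function name variance_style_minimize, the symmetric rank-one correction to the estimate V, and all parameter choices are hypothetical.

```python
import numpy as np

def variance_style_minimize(f, grad, x0, n_iter=50, tol=1e-8):
    """Illustrative quasi-Newton sketch (not Davidon's exact method):
    one value and one gradient per iteration, a variance-matrix
    estimate V, and restarts from the best point found so far."""
    x_best = np.asarray(x0, dtype=float)
    f_best = f(x_best)
    g_best = grad(x_best)
    V = np.eye(x_best.size)               # initial variance-matrix estimate

    for _ in range(n_iter):
        step = -V @ g_best                # Newton-like step using current V
        x_new = x_best + step
        f_new, g_new = f(x_new), grad(x_new)   # one value + one gradient

        # Symmetric rank-one correction of V from the observed gradient change.
        s = x_new - x_best
        y = g_new - g_best
        r = s - V @ y
        denom = r @ y
        if abs(denom) > 1e-12 * np.linalg.norm(r) * np.linalg.norm(y):
            V = V + np.outer(r, r) / denom

        # Each iteration begins at the point with the least computed value so far.
        if f_new < f_best:
            x_best, f_best, g_best = x_new, f_new, g_new
        if np.linalg.norm(g_best) < tol:
            break
    return x_best, f_best, V
```

For a quadratic f(x) = ½ xᵀAx − bᵀx with gradient Ax − b, successive rank-one corrections drive V toward A⁻¹, which mirrors the abstract's N-iteration convergence claim, although this sketch makes no attempt to reproduce the paper's exact guarantees.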

Citation (APA)

Davidon, W. C. (1968). Variance algorithm for minimization. Computer Journal, 10(4), 406–410. https://doi.org/10.1093/comjnl/10.4.406
