Abstract
An algorithm is presented for minimizing real-valued differentiable functions on an N-dimensional manifold. In each iteration, the value of the function and its gradient are computed just once and used to form new estimates for the location of the minimum and for the variance matrix (i.e. the inverse of the matrix of second derivatives). A proof is given of convergence within N iterations to the exact minimum and variance matrix for quadratic functions. Whether or not the function is quadratic, each iteration begins at the point where the function has the least of all past computed values. © 1968 The British Computer Society.
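The abstract describes a quasi-Newton scheme: one gradient evaluation per iteration, a running estimate of the variance matrix (inverse Hessian), and exact termination in N iterations on quadratics. As a rough illustration only, the sketch below uses the symmetric rank-one update, which shares those properties; it is not Davidon's actual update formulas, and the paper's restart-from-best-point rule is reduced here to merely tracking the best point seen.

```python
import numpy as np

def variance_minimize(f, grad, x0, n_iter=None):
    """Quasi-Newton sketch in the spirit of the variance algorithm.

    One function/gradient evaluation per iteration; V (the variance-matrix
    estimate) is refined by a symmetric rank-one update.  This is an
    illustrative stand-in, not Davidon's published update formulas.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    V = np.eye(n)                    # initial variance-matrix estimate
    g = grad(x)
    best_x, best_f = x.copy(), f(x)  # best point seen so far (the paper
                                     # restarts from it; we only track it)
    for _ in range(n_iter or n + 1):
        s = -V @ g                   # Newton-like step using current V
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                # change in gradient along the step
        r = s - V @ y
        denom = r @ y
        # Skip the update when the denominator is numerically unsafe.
        if abs(denom) > 1e-12 * np.linalg.norm(r) * np.linalg.norm(y):
            V += np.outer(r, r) / denom   # symmetric rank-one update
        x, g = x_new, g_new
        f_new = f(x)
        if f_new < best_f:
            best_x, best_f = x.copy(), f_new
    return best_x, V
```

On a quadratic f(x) = ½xᵀAx − bᵀx this sketch recovers both the exact minimizer and V = A⁻¹ after N + 1 iterations (given linearly independent steps), mirroring the finite-termination property stated in the abstract.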
Citation
Davidon, W. C. (1968). Variance algorithm for minimization. Computer Journal, 10(4), 406–410. https://doi.org/10.1093/comjnl/10.4.406