We discuss the convergence of line search methods for minimization. We explain how Newton's method and the BFGS method can fail even if the restrictions of the objective function to the search lines are strictly convex functions, the level sets of the objective function are compact, the line searches are exact, and the Wolfe conditions are satisfied. This explanation illustrates a new way to combine general mathematical concepts and symbolic computation to analyze the convergence of line search methods. It also illustrates the limitations of the asymptotic analysis of the iterates of nonlinear programming algorithms. Copyright © 2007 SBMAC.
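For reference, the Wolfe conditions mentioned in the abstract are the standard sufficient-decrease and curvature requirements on the step length; the statement below is the textbook formulation (not taken from the paper itself), with search direction \(p_k\), step length \(\alpha_k\), and constants \(0 < c_1 < c_2 < 1\):

\[
\begin{aligned}
f(x_k + \alpha_k p_k) &\le f(x_k) + c_1 \alpha_k \nabla f(x_k)^{T} p_k, \\
\nabla f(x_k + \alpha_k p_k)^{T} p_k &\ge c_2 \nabla f(x_k)^{T} p_k.
\end{aligned}
\]

The paper's point is that satisfying these conditions (even with exact line searches and compact level sets) does not by itself guarantee convergence of Newton's method or BFGS.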
Mascarenhas, W. F. (2007). On the divergence of line search methods. Computational and Applied Mathematics, 26(1), 129–169. https://doi.org/10.1590/s1807-03022007000100006