Abstract
An algorithm for minimizing a nonlinear function subject to equality constraints is introduced. It is a trust-region algorithm; in computing the trial step, a projected-Hessian technique is used that converts the trust-region subproblem into one similar to the unconstrained case. To force global convergence, the augmented Lagrangian is employed as a merit function. One of the main advantages of the algorithm is the way the penalty parameter is updated: an updating scheme is introduced that allows (for the first time, to the best of our knowledge) the penalty parameter to be decreased whenever this is warranted, and the behavior of the penalty parameter is studied. A convergence theory for the algorithm is presented. It shows that the algorithm is globally convergent and that the globalization strategy does not disrupt fast local convergence; the local rate of convergence is also discussed. The theory is sufficiently general that it holds for any algorithm that generates steps whose normal components give at least a fraction of Cauchy decrease in the quadratic model of the constraints and that uses Fletcher's exact penalty function as a merit function.
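For orientation, the setting and conditions the abstract refers to can be written in standard trust-region notation; the symbols below (f, h, J_k, \lambda, r, \theta, and the steps d^n, d^cp) are our own shorthand under the usual conventions and are not taken from the paper itself:

\[
  \min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad h(x) = 0, \qquad h : \mathbb{R}^n \to \mathbb{R}^m,
\]
\[
  \Phi(x, \lambda; r) = f(x) + \lambda^{T} h(x) + r \, \lVert h(x) \rVert_2^{2},
\]
\[
  \lVert h_k \rVert^{2} - \lVert h_k + J_k d^{\mathrm{n}} \rVert^{2}
  \;\ge\; \theta \bigl( \lVert h_k \rVert^{2} - \lVert h_k + J_k d^{\mathrm{cp}} \rVert^{2} \bigr),
  \qquad \theta \in (0, 1].
\]

Here \Phi is the standard augmented Lagrangian merit function with multiplier estimate \lambda and penalty parameter r, J_k is the Jacobian of the constraints at the current iterate x_k, d^n is the normal component of the trial step, and d^cp is the Cauchy step for the quadratic model of the constraints; the last inequality is the customary fraction-of-Cauchy-decrease requirement mentioned in the abstract.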
Citation
El-Alem, M. (1995). A Robust Trust-Region Algorithm with a Nonmonotonic Penalty Parameter Scheme for Constrained Optimization. SIAM Journal on Optimization, 5(2), 348–378. https://doi.org/10.1137/0805018