Methods for constrained optimization described in this chapter can be broadly classified as constraint-following methods or penalty function methods. The gradient projection method and the generalized reduced gradient method are both constraint-following methods, on the premise that the optimum lies on one or more constraints, so the aim is to follow the constraints as closely as possible around the design space. In the gradient projection method, only the constraints currently active are included at any stage, and the best search direction is found on the intersection of those constraints. Because the constraints are nonlinear, constraint gradients must be re-evaluated at each step before the process continues. In the generalized reduced gradient method, one of the methods in Solver, surplus variables are added to convert inequality constraints into equalities instead of using an active-constraint strategy. A search direction is then obtained from the reduced gradient in a set of independent variables; again, constraint gradients must be re-evaluated at each step. In a penalty function method, terms containing the constraint functions are added to the objective function, converting it, in effect, into an unconstrained problem, the aim being to avoid constraints or to penalize constraint violation. By increasing or decreasing a penalty parameter, the solution converges to the optimum of the constrained problem. A spreadsheet program for the penalty function method is based on the Hooke and Jeeves method of the previous chapter.
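As an illustration of the penalty function idea combined with a Hooke and Jeeves pattern search, the sketch below minimizes an exterior quadratic penalty for a sequence of increasing penalty parameters. The sample problem, the quadratic penalty form, and all parameter values are assumptions chosen for the sketch, not details taken from the chapter.

```python
def f(x):
    # illustrative objective: minimize x1^2 + x2^2
    return x[0] ** 2 + x[1] ** 2

def g(x):
    # illustrative inequality constraint, required g(x) <= 0,
    # i.e. x1 + x2 >= 1
    return 1.0 - x[0] - x[1]

def explore(phi, x, fx, step):
    # exploratory move: perturb each variable by +/- step, keep improvements
    x = list(x)
    best = fx
    for i in range(len(x)):
        for d in (step, -step):
            x[i] += d
            fi = phi(x)
            if fi < best:
                best = fi
                break
            x[i] -= d  # revert unsuccessful perturbation
    return x, best

def hooke_jeeves(phi, x0, step=0.5, tol=1e-6, shrink=0.5):
    # basic derivative-free Hooke and Jeeves pattern search
    base = list(x0)
    fbase = phi(base)
    while step > tol:
        trial, ftrial = explore(phi, base, fbase, step)
        if ftrial < fbase:
            # pattern move: extrapolate along the successful direction
            while True:
                pattern = [2.0 * t - b for t, b in zip(trial, base)]
                base, fbase = trial, ftrial
                t2, f2 = explore(phi, pattern, phi(pattern), step)
                if f2 < fbase:
                    trial, ftrial = t2, f2
                else:
                    break
        else:
            step *= shrink  # exploration failed: reduce the step size
    return base, fbase

# exterior penalty loop: raise r so constraint violation is penalized harder
x = [0.0, 0.0]
r = 1.0
for _ in range(8):
    phi = lambda x, r=r: f(x) + r * max(0.0, g(x)) ** 2
    x, _ = hooke_jeeves(phi, x)
    r *= 10.0
# x converges toward the constrained optimum (0.5, 0.5)
```

For an exterior penalty the unconstrained minima lie slightly on the infeasible side of the constraint and approach the true optimum as the penalty parameter grows, which is why the loop re-minimizes from the previous solution at each increase of r.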
Rothwell, A. (2017). Numerical methods for constrained optimization. In Solid Mechanics and its Applications (Vol. 242, pp. 107–145). Springer Verlag. https://doi.org/10.1007/978-3-319-55197-5_5