Lagrangian relaxation via ballstep subgradient methods

Abstract

We exhibit useful properties of ballstep subgradient methods for convex optimization, using level controls for estimating the optimal value. Augmented with simple averaging schemes, they asymptotically find objective and constraint subgradients involved in optimality conditions. When applied to Lagrangian relaxation of convex programs, they find both primal and dual solutions, and have practicable stopping criteria. Until now, similar results have been known only for proximal bundle methods and for subgradient methods with divergent-series stepsizes, whose convergence can be slow. Encouraging numerical results are presented for large-scale nonlinear multicommodity network flow problems.
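As a rough illustration of the ingredients named in the abstract (a level-based step rule for the dual and simple averaging to recover a primal point), the sketch below runs a Polyak/level-type subgradient ascent on the Lagrangian dual of a tiny convex program. It is a toy under stated assumptions, not the paper's ballstep algorithm: the name `level_gap`, the gap-shrinking rule `level_gap / k`, and the plain running average are illustrative choices only.

```python
# Toy Lagrangian relaxation:  min x1^2 + x2^2  s.t.  x1 + x2 >= 1
# Dual:  maximize theta(u) = min_x [ x1^2 + x2^2 + u*(1 - x1 - x2) ],  u >= 0.
# This is a simplified level-type subgradient sketch, NOT the ballstep method itself.
import numpy as np

def inner_min(u):
    """Minimize the Lagrangian x1^2 + x2^2 + u*(1 - x1 - x2) over x (closed form)."""
    x = np.array([u / 2.0, u / 2.0])        # minimizer of the Lagrangian
    theta = x @ x + u * (1.0 - x.sum())     # dual function value theta(u)
    g = 1.0 - x.sum()                       # subgradient of theta at u
    return x, theta, g

u = 0.0                                     # dual multiplier (kept >= 0)
best_theta = -np.inf
x_avg = np.zeros(2)                         # running average of inner minimizers
level_gap = 0.5                             # crude initial optimality-gap estimate [assumption]

for k in range(1, 201):
    x, theta, g = inner_min(u)
    best_theta = max(best_theta, theta)
    target = best_theta + level_gap / k     # level: best dual value plus a shrinking gap estimate
    step = (target - theta) / max(g * g, 1e-12)
    u = max(0.0, u + step * g)              # subgradient ascent step, projected onto u >= 0
    x_avg += (x - x_avg) / k                # simple (ergodic) primal averaging

print("dual u =", round(u, 3), " best theta =", round(best_theta, 3))
print("averaged primal =", np.round(x_avg, 3), " (exact optimum is [0.5, 0.5])")
```

The averaged inner minimizers approach the primal optimum even though individual dual iterates need not yield feasible primal points, which is the role the averaging schemes play in the paper; the actual ballstep method adds safeguards (ball constraints on steps and more careful level updates) that this sketch omits.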

Citation (APA)

Kiwiel, K. C., Larsson, T., & Lindberg, P. O. (2007). Lagrangian relaxation via ballstep subgradient methods. Mathematics of Operations Research, 32(3), 669–686. https://doi.org/10.1287/moor.1070.0261
