Nonsmooth convex optimization

Abstract

In this chapter, we consider the most general convex optimization problems, formed by non-differentiable convex functions. We start by studying the main properties of these functions and the definition of subgradients, which serve as the search directions in the corresponding optimization schemes. We also prove the necessary facts from Convex Analysis, including several variants of Minimax Theorems. After that, we establish lower complexity bounds and prove the convergence rate of the Subgradient Method for constrained and unconstrained problems; this method turns out to be optimal uniformly in the dimension of the space of variables. We then consider other optimization methods that can work in spaces of moderate dimension (the Method of Centers of Gravity, the Ellipsoid Algorithm). The chapter concludes with a presentation of methods based on a complete piecewise-linear model of the objective function (Kelley's method, the Level Method).
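As a brief illustration of the central objects named above (a sketch of our own, not code from the chapter): a vector g is a subgradient of a convex function f at x if f(y) >= f(x) + <g, y - x> for all y, and the Subgradient Method simply steps along -g with diminishing step sizes. The step-size rule h_k = R/(M*sqrt(k+1)), the toy l1-norm problem, and all names below are our assumptions for the sketch; under them the best observed value converges at a dimension-independent rate of order M*R/sqrt(k), matching the abstract's optimality claim.

    import numpy as np

    def projected_subgradient(f, subgrad, project, x0, R, M, n_iters=1000):
        """Projected Subgradient Method (illustrative sketch, not the book's code).

        Minimizes a convex, possibly non-differentiable f over a closed convex
        set Q, given an oracle subgrad(x) returning any subgradient g of f at x
        and project(x), the Euclidean projection onto Q.
        R bounds ||x0 - x*||; M bounds ||g|| (a Lipschitz constant of f).
        """
        x = np.asarray(x0, dtype=float).copy()
        f_best, x_best = f(x), x.copy()
        for k in range(n_iters):
            g = subgrad(x)
            h = R / (M * np.sqrt(k + 1.0))   # classical diminishing step size
            x = project(x - h * g)           # subgradient step, then projection
            fx = f(x)
            if fx < f_best:                  # f(x_k) need not decrease monotonically
                f_best, x_best = fx, x.copy()
        return x_best, f_best

    # Toy usage: minimize ||x||_1 over a unit Euclidean ball centered at (2, -1).
    f = lambda x: np.abs(x).sum()
    subgrad = lambda x: np.sign(x)           # a valid subgradient of the l1-norm
    c = np.array([2.0, -1.0])
    project = lambda x: c + (x - c) / max(1.0, np.linalg.norm(x - c))
    x_best, f_best = projected_subgradient(f, subgrad, project, x0=c,
                                           R=2.0, M=np.sqrt(2.0), n_iters=2000)
    print(x_best, f_best)                    # -> roughly (2, -1) + (-1, 1)/sqrt(2)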

Citation (APA)

Nesterov, Y. (2018). Nonsmooth convex optimization. In Lectures on Convex Optimization (Springer Optimization and Its Applications, Vol. 137, pp. 139–240). Springer International Publishing. https://doi.org/10.1007/978-3-319-91578-4_3
