Optimization in relative scale

Abstract

In many applications it is difficult to relate the number of iterations of an optimization scheme to the desired accuracy of the solution, since the corresponding inequality contains unknown parameters (the Lipschitz constant, the distance to the optimum). In many cases, however, the required level of relative accuracy is well understood. To develop methods that compute solutions with a prescribed relative accuracy, we need to exploit the internal structure of the problem. In this chapter, we start with problems of minimizing homogeneous objective functions over a convex set separated from the origin. The subdifferential of such a function at zero provides a good metric, which can be used both in optimization schemes and in the smoothing technique. If this subdifferential is polyhedral, the metric can be computed by a cheap preliminary rounding process. We also present a barrier subgradient method, which computes an approximate maximum of a positive convex function with a given relative accuracy. We show how to apply this method to problems of fractional covering, maximal concurrent flow, semidefinite relaxation, online optimization, portfolio management, and others. Finally, we consider a class of strictly positive functions, for which a kind of quasi-Newton method is developed.
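To make the notion of relative accuracy concrete, the following sketch (not the chapter's actual scheme; problem data, step sizes, and the target `delta` are illustrative assumptions) runs a plain projected subgradient method on a homogeneous objective, the Euclidean norm, over a hyperplane separated from the origin, and stops once the relative-accuracy condition f(x) ≤ (1 + δ) f* holds:

```python
import numpy as np

# Hypothetical illustration: minimize f(x) = ||x||_2 (homogeneous of
# degree 1) over the convex set {x : a^T x = 1}, which is separated
# from the origin. Here the optimum is known in closed form
# (x* = a / ||a||^2, f* = 1 / ||a||), so we can check the relative
# accuracy f(x) <= (1 + delta) * f* directly.
rng = np.random.default_rng(0)
a = rng.normal(size=5)
f_star = 1.0 / np.linalg.norm(a)

def project(x):
    """Euclidean projection onto the hyperplane {x : a^T x = 1}."""
    return x - ((a @ x - 1.0) / (a @ a)) * a

delta = 0.01                              # target relative accuracy
x = project(np.ones_like(a))              # feasible starting point
for k in range(1, 100001):
    g = x / np.linalg.norm(x)             # subgradient of ||x||_2 at x != 0
    x = project(x - (0.1 / np.sqrt(k)) * g)   # diminishing step size
    if np.linalg.norm(x) <= (1.0 + delta) * f_star:
        break

print(k, np.linalg.norm(x) / f_star)
```

Note that the stopping test here uses the known optimal value f*; the point of the chapter's methods is precisely to certify relative accuracy without knowing f* in advance, by exploiting homogeneity.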

Citation (APA)

Nesterov, Y. (2018). Optimization in relative scale. In Springer Optimization and Its Applications (Vol. 137, pp. 489–570). Springer International Publishing. https://doi.org/10.1007/978-3-319-91578-4_7
