Nonlinear optimization

Abstract

In this chapter, we introduce the main notation and concepts used in Continuous Optimization. The first theoretical results concern the complexity analysis of Global Optimization problems. For these problems, we start with a very pessimistic lower complexity bound: for any method there exists an optimization problem in ℝⁿ that requires at least O(1/εⁿ) computations of function values in order to approximate its global solution up to accuracy ε. Therefore, in the next section we pass to local optimization and consider two main methods, the Gradient Method and the Newton Method. For both of them, we establish local rates of convergence. In the last section, we present some standard methods of General Nonlinear Optimization: conjugate gradient methods, quasi-Newton methods, the theory of Lagrangian relaxation, barrier methods, and penalty function methods. For some of them, we prove global convergence results.
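The chapter itself contains no code; as a hedged illustration of the two local schemes named above, the sketch below implements a fixed-step gradient method and a pure Newton method on a generic smooth function. The test function, the step size h, the tolerance, and the iteration caps are all assumptions made for this example, not values from the chapter.

```python
# Minimal sketch (not the chapter's own implementation) of:
#   Gradient Method: x_{k+1} = x_k - h * grad f(x_k)
#   Newton Method:   x_{k+1} = x_k - [hess f(x_k)]^{-1} grad f(x_k)
# Step size h, tolerance, and the test problem below are illustrative assumptions.
import numpy as np

def gradient_method(grad, x0, h=0.1, tol=1e-8, max_iter=10_000):
    """Fixed-step gradient method; converges linearly near a strict local minimum."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - h * g
    return x

def newton_method(grad, hess, x0, tol=1e-8, max_iter=100):
    """Pure Newton method; converges quadratically near a nondegenerate local minimum."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)  # Newton step: solve H(x) d = g
    return x

# Assumed test problem: f(x) = 0.5 x^T A x - b^T x, a strongly convex quadratic
# with minimizer x* = A^{-1} b = (0.2, 0.4).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
hess = lambda x: A

print(gradient_method(grad, [0.0, 0.0]))        # approaches x* linearly
print(newton_method(grad, hess, [0.0, 0.0]))    # exact in one step for quadratics
```

On a quadratic the Newton step lands on the minimizer in a single iteration, while the gradient method contracts at a rate governed by the eigenvalues of A; this gap is exactly the local-rate comparison the chapter develops.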

Citation

Nesterov, Y. (2018). Nonlinear optimization. In Springer Optimization and Its Applications (Vol. 137, pp. 3–58). Springer International Publishing. https://doi.org/10.1007/978-3-319-91578-4_1
