Second-order methods

Abstract

In this chapter, we study Black-Box second-order methods. In the first two sections, we consider methods based on cubic regularization of the second-order model of the objective function. With an appropriate proximal coefficient, this model becomes a global upper approximation of the objective function. At the same time, the global minimum of this approximation is computable in polynomial time even if the Hessian of the objective is not positive semidefinite. We study the global and local convergence of the Cubic Newton Method in the convex and non-convex cases. In the next section, we derive lower complexity bounds and show that this method can be accelerated using the estimating sequences technique. In the last section, we consider a modification of the standard Gauss–Newton method for solving systems of nonlinear equations. This modification is also based on an overestimating principle, applied to the norm of the residual of the system. Both global and local convergence results are justified.
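For orientation, the overestimating principle mentioned in the abstract can be sketched as follows (a sketch in generic notation introduced here: f denotes the objective, F the residual map of the nonlinear system, L a Lipschitz constant of the Hessian or Jacobian, and M >= L the regularization coefficient). If the Hessian of f is Lipschitz continuous with constant L, then for all x and y

\[
f(y) \;\le\; f(x) + \langle \nabla f(x), y - x \rangle + \tfrac{1}{2} \langle \nabla^2 f(x)(y - x), y - x \rangle + \tfrac{M}{6}\, \| y - x \|^3 ,
\]

so each step of the Cubic Newton Method minimizes a global upper model of f. Similarly, if the Jacobian F'(x) is Lipschitz continuous with constant L, the norm of the residual admits the upper model

\[
\| F(y) \| \;\le\; \| F(x) + F'(x)(y - x) \| + \tfrac{L}{2}\, \| y - x \|^2 ,
\]

which is the kind of overestimate underlying the modified Gauss–Newton method.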

Cite

Nesterov, Y. (2018). Second-order methods. In Springer Optimization and Its Applications (Vol. 137, pp. 241–322). Springer International Publishing. https://doi.org/10.1007/978-3-319-91578-4_4
