A conjugate gradient method with global convergence for large-scale unconstrained optimization problems

Abstract

The conjugate gradient (CG) method plays a special role in solving large-scale nonlinear optimization problems due to its simplicity and very low memory requirements. This paper proposes a conjugate gradient method that is similar to the Dai–Liao conjugate gradient method (Dai and Liao, 2001) but has stronger convergence properties. The given method possesses the sufficient descent condition and is globally convergent under the strong Wolfe–Powell (SWP) line search for general functions. Our numerical results show that the proposed method is very efficient on the test problems. © 2013 Shengwei Yao et al.
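To make the ingredients named in the abstract concrete, the following is a minimal sketch of a Dai–Liao-type CG iteration with a strong Wolfe (SWP) line search. It uses the classical Dai–Liao update β_k = g_{k+1}ᵀ(y_k − t·s_k) / (d_kᵀ y_k), not the authors' modified formula (which appears only in the full paper); the bisection line search, the parameter t = 0.1, and the restart safeguard are illustrative choices, not taken from this article.

```python
import numpy as np

def strong_wolfe(f, grad, x, d, c1=1e-4, c2=0.1, alpha0=1.0, max_iter=30):
    """Bracketing/bisection line search enforcing the strong Wolfe conditions.
    A common textbook scheme, assumed here; not the authors' exact procedure."""
    phi0, dphi0 = f(x), grad(x) @ d          # dphi0 < 0 for a descent direction
    lo, hi, alpha = 0.0, np.inf, alpha0
    for _ in range(max_iter):
        if f(x + alpha * d) > phi0 + c1 * alpha * dphi0:
            hi = alpha                        # Armijo condition violated
        else:
            dphi = grad(x + alpha * d) @ d
            if abs(dphi) <= -c2 * dphi0:      # strong curvature condition holds
                return alpha
            if dphi >= 0:
                hi = alpha
            else:
                lo = alpha
        alpha = 2 * alpha if hi == np.inf else 0.5 * (lo + hi)
    return alpha

def dai_liao_cg(f, grad, x0, t=0.1, tol=1e-6, max_iter=500):
    """Nonlinear CG with the classical Dai-Liao beta (t >= 0 is a free parameter)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = strong_wolfe(f, grad, x, d)
        s = alpha * d                         # s_k = x_{k+1} - x_k
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                         # y_k = g_{k+1} - g_k
        denom = d @ y
        beta = (g_new @ (y - t * s)) / denom if abs(denom) > 1e-12 else 0.0
        d = -g_new + beta * d
        if g_new @ d >= 0:                    # safeguard: restart if not descent
            d = -g_new
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic such as f(x) = ½xᵀAx − bᵀx with A positive definite, this iteration drives the gradient norm below the tolerance in a handful of steps.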

Citation (APA)

Yao, S., Lu, X., & Wei, Z. (2013). A conjugate gradient method with global convergence for large-scale unconstrained optimization problems. Journal of Applied Mathematics, 2013. https://doi.org/10.1155/2013/730454
