Using Automatic Differentiation for Second-Order Matrix-free Methods in PDE-constrained Optimization

  • Keyes, D. E.
  • Hovland, P. D.
  • McInnes, L. C.
  • Samyono, W.

Abstract

Classical methods of constrained optimization are often based on the assumptions that projection onto the constraint manifold is routine but that accessing second-derivative information is not. Both assumptions need revision for the application of optimization to systems constrained by partial differential equations, in the contemporary limit of millions of state variables and in the parallel setting. Large-scale PDE solvers are complex pieces of software that exploit detailed knowledge of architecture and application and cannot easily be modified to fit the interface requirements of a black-box optimizer. Furthermore, in view of the expense of PDE analyses, optimization methods that do not use second derivatives may require too many iterations to be practical. For general problems, automatic differentiation is likely to be the most convenient means of exploiting second derivatives. We delineate a role for automatic differentiation in matrix-free optimization formulations involving Newton's method, in which little more storage is required than that for the analysis code alone.
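To make the matrix-free role of automatic differentiation concrete, the sketch below (not taken from the paper) applies the same idea in JAX: second-derivative information enters only through Hessian-vector products formed by forward-over-reverse differentiation, which a conjugate-gradient solver consumes inside a Newton step without ever assembling the Hessian. The toy objective, the choice of JAX, and the solver settings are illustrative assumptions standing in for a reduced-space PDE-constrained objective.

# A minimal sketch, assuming a generic smooth objective in place of a
# reduced-space PDE-constrained objective (illustrative, not the authors' code).
import jax
import jax.numpy as jnp

def objective(x):
    # Hypothetical stand-in objective; in the paper's setting, evaluating this
    # would involve a full PDE analysis.
    return jnp.sum((x - 1.0) ** 2) + 0.1 * jnp.sum(x ** 4)

grad_f = jax.grad(objective)

def hvp(x, v):
    # Hessian-vector product by forward-over-reverse automatic differentiation:
    # differentiate the gradient along direction v; the Hessian is never formed.
    return jax.jvp(grad_f, (x,), (v,))[1]

def newton_step(x, cg_iters=50):
    g = grad_f(x)
    # Inner linear solve H p = -g with conjugate gradients, driven entirely by
    # Hessian-vector products (matrix-free).
    p, _ = jax.scipy.sparse.linalg.cg(lambda v: hvp(x, v), -g, maxiter=cg_iters)
    return x + p

x = jnp.zeros(5)
for _ in range(8):
    x = newton_step(x)
print(x)  # approaches the minimizer of the toy objective

Because only gradient and Hessian-vector evaluations are needed, the storage footprint of such a Newton iteration stays close to that of the analysis (gradient) code alone, which is the point the abstract emphasizes.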

Citation (APA)

Keyes, D. E., Hovland, P. D., McInnes, L. C., & Samyono, W. (2002). Using Automatic Differentiation for Second-Order Matrix-free Methods in PDE-constrained Optimization. In Automatic Differentiation of Algorithms (pp. 35–50). Springer New York. https://doi.org/10.1007/978-1-4613-0075-5_3
