Semiautomatic Differentiation for Efficient Gradient Computations

Abstract

Many large-scale computations involve a mesh and first (or sometimes higher) partial derivatives of functions of mesh elements. In principle, automatic differentiation (AD) can provide the requisite partials more efficiently and accurately than conventional finite-difference approximations. AD requires source-code modifications, which may be little more than changes to declarations. Such simple changes can easily give improved results, e.g., when Jacobian-vector products are used iteratively to solve nonlinear equations. When gradients are required (say, for optimization) and the problem involves many variables, "backward" AD is in theory very efficient, but when carried out automatically and straightforwardly, it may use a prohibitive amount of memory. In this case, applying AD separately to each element function and manually assembling the gradient pieces (semiautomatic differentiation) can deliver gradients efficiently and accurately. This paper concerns ongoing work; it compares several implementations of backward AD, describes a simple operator-overloading implementation specialized for gradient computations, and compares the implementations on some mesh-optimization examples. Ideas from the specialized implementation could be used in fully general source-to-source translators for C and C++.
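The abstract describes the element-at-a-time idea only at a high level. The following C++ fragment is a rough sketch of that assembly step, not Gay's implementation: a toy operator-overloading reverse-mode ("backward") AD whose tape is cleared before each element function, so the memory used by the reverse sweep is proportional to one element expression rather than to the whole mesh. The Var/Node/tape types, the 1-D "mesh", and the (length - 1)^2 element objective are invented here purely for illustration.

#include <cstdio>
#include <cmath>
#include <vector>

// Tape for the element function currently being differentiated.  It is
// cleared before each element, so its size is bounded by the largest
// element expression, not by the whole mesh; that bound is the point of
// the "semiautomatic" assembly described in the abstract.
struct Node { double dl, dr; int l, r; };   // local partials and parent slots
static std::vector<Node> tape;

struct Var {
    double v;   // value
    int id;     // tape slot (-1 for constants not on the tape)
    Var(double x = 0.0) : v(x), id(-1) {}
};

static Var record(double v, double dl, int l, double dr = 0.0, int r = -1) {
    Var out(v);
    out.id = (int)tape.size();
    tape.push_back({dl, dr, l, r});
    return out;
}

static Var leaf(double x) { return record(x, 0.0, -1); }  // independent variable

static Var operator-(const Var& a, const Var& b) { return record(a.v - b.v, 1.0, a.id, -1.0, b.id); }
static Var operator*(const Var& a, const Var& b) { return record(a.v * b.v, b.v, a.id, a.v, b.id); }

// Reverse ("backward") sweep over the current element's tape.
static void backward(const Var& out, std::vector<double>& adj) {
    adj.assign(tape.size(), 0.0);
    adj[out.id] = 1.0;
    for (int i = (int)tape.size() - 1; i >= 0; --i) {
        const Node& n = tape[i];
        if (n.l >= 0) adj[n.l] += n.dl * adj[i];
        if (n.r >= 0) adj[n.r] += n.dr * adj[i];
    }
}

int main() {
    // Toy 1-D "mesh": nodes on a line, one element (edge) between neighbors.
    // Element objective (edge length - 1)^2 stands in for a mesh-quality measure.
    std::vector<double> x = {0.0, 0.8, 2.1, 3.0};
    std::vector<double> grad(x.size(), 0.0);
    std::vector<double> adj;
    double total = 0.0;

    for (size_t e = 0; e + 1 < x.size(); ++e) {
        tape.clear();                         // fresh, small tape per element
        Var a = leaf(x[e]), b = leaf(x[e + 1]);
        Var r = (b - a) - Var(1.0);           // length minus target length
        Var f = r * r;
        backward(f, adj);                     // local reverse sweep
        total       += f.v;
        grad[e]     += adj[a.id];             // assemble the element gradient
        grad[e + 1] += adj[b.id];             // into the global gradient
    }

    std::printf("objective = %g\n", total);
    for (size_t i = 0; i < x.size(); ++i)
        std::printf("grad[%zu] = %g\n", i, grad[i]);
    return 0;
}

A fully automatic tool that taped the entire objective would instead store every operation of every element before the single reverse sweep; for a large mesh that is the memory blow-up the abstract warns about, and the per-element tape with manual assembly avoids it.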

Citation (APA)

Gay, D. M. (2006). Semiautomatic Differentiation for Efficient Gradient Computations. Lecture Notes in Computational Science and Engineering, 50, 147–158. https://doi.org/10.1007/3-540-28438-9_13
