A low rank approach to automatic differentiation


Abstract

This manuscript introduces a new approach for increasing the efficiency of automatic differentiation (AD) computations when estimating the first-order derivatives comprising the Jacobian matrix of a complex large-scale computational model. The objective is to approximate the entire Jacobian matrix with minimal computational and storage resources. This is achieved by finding low-rank approximations to the Jacobian matrix via the Efficient Subspace Method (ESM). Low-rank Jacobian matrices arise in many of today's important scientific and engineering problems, e.g. nuclear reactor calculations, weather and climate modeling, and geophysical applications. A low-rank approximation replaces the original Jacobian matrix J (whose size is dictated by the sizes of the input and output data streams) with matrices of much smaller dimensions (determined by the numerical rank of the Jacobian matrix). This process reveals the rank of the Jacobian matrix and can be carried out by ESM via a series of r randomized matrix-vector products of the form Jq and J^T ω, which can be evaluated by the AD forward and reverse modes, respectively. © 2008 Springer-Verlag Berlin Heidelberg.
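The scheme described above can be sketched as a randomized range-finder built from matrix-vector products alone. The sketch below is illustrative, not the authors' implementation: an explicit low-rank matrix J stands in for the computational model so the result can be checked, and the `jvp`/`vjp` functions are hypothetical stand-ins for what an AD tool's forward and reverse modes would supply.

```python
import numpy as np

# Stand-in for the model's Jacobian: in practice J is never formed;
# only the AD forward mode (J q) and reverse mode (J^T w) are available.
rng = np.random.default_rng(0)
m, n, true_rank = 200, 150, 10
J = rng.standard_normal((m, true_rank)) @ rng.standard_normal((true_rank, n))

def jvp(q):
    # Forward-mode AD stand-in: returns J q for one input-direction vector q.
    return J @ q

def vjp(w):
    # Reverse-mode AD stand-in: returns J^T w for one output-weight vector w.
    return J.T @ w

def low_rank_jacobian(jvp, vjp, n, r):
    """Approximate J ~= Q @ B from r forward and r reverse matvecs."""
    Omega = np.random.default_rng(1).standard_normal((n, r))
    # r forward-mode products J q give a sample of the Jacobian's range.
    Y = np.column_stack([jvp(Omega[:, i]) for i in range(r)])
    Q, _ = np.linalg.qr(Y)  # orthonormal basis for that range
    # r reverse-mode products J^T w recover B = Q^T J as (J^T Q)^T.
    B = np.column_stack([vjp(Q[:, i]) for i in range(r)]).T
    return Q, B

Q, B = low_rank_jacobian(jvp, vjp, n, r=12)
rel_err = np.linalg.norm(J - Q @ B) / np.linalg.norm(J)
```

Because the sampled rank r here exceeds the numerical rank of J, the product Q @ B reproduces J to rounding error, using only 2r matvecs instead of the min(m, n) AD sweeps a full Jacobian accumulation would need.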

Citation (APA)

Abdel-Khalik, H. S., Hovland, P. D., Lyons, A., Stover, T. E., & Utke, J. (2008). A low rank approach to automatic differentiation. In Lecture Notes in Computational Science and Engineering (Vol. 64 LNCSE, pp. 55–65). https://doi.org/10.1007/978-3-540-68942-3_6
