The data-flow equations of checkpointing in reverse Automatic Differentiation



Abstract

Checkpointing is a technique to reduce the memory consumption of adjoint programs produced by reverse Automatic Differentiation. However, checkpointing also uses a non-negligible memory space for the so-called "snapshots". We analyze the data-flow of checkpointing, yielding a precise characterization of all possible memory-optimal options for snapshots. This characterization is formally derived from the structure of checkpoints and from classical data-flow equations. In particular, we select two very different options and study their behavior on a number of real codes. Although no option is uniformly better, the so-called "lazy-snapshot" option appears preferable in general. © Springer-Verlag Berlin Heidelberg 2006.
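To illustrate the trade-off the abstract describes, here is a minimal toy sketch (not the paper's algorithm) of checkpointing in reverse AD, in Python. The primal program is a chain of `n` identical `sin` steps; the names `step`, `grad_checkpointed`, and the single-checkpoint layout are assumptions made for illustration. A plain reverse sweep would tape all `n` intermediates; with one checkpoint, the first half of the chain is not taped during the forward sweep — only its input (the "snapshot") is saved, and the segment is recomputed later, roughly halving the peak tape size at the cost of one extra forward sweep.

```python
import math

def step(x):
    # one elementary step of the toy primal program: x -> sin(x)
    return math.sin(x)

def step_adjoint(x, ybar):
    # adjoint of one step: d(sin x)/dx = cos(x)
    return ybar * math.cos(x)

def grad_checkpointed(x0, n):
    """Gradient of n composed steps by reverse AD with one checkpoint.

    Segment 1 = steps [0, mid), segment 2 = steps [mid, n).
    During the forward sweep, segment 1 is run WITHOUT taping; its input
    x0 is kept as the snapshot. Segment 2 is taped and reversed first;
    then segment 1 is recomputed from the snapshot, taped, and reversed.
    Peak tape size drops from n to max(mid, n - mid).
    """
    mid = n // 2
    snapshot = x0          # state needed to rerun the checkpointed segment

    # forward sweep through segment 1, no taping
    x = x0
    for _ in range(mid):
        x = step(x)

    # segment 2: tape intermediates, then reverse
    tape = [x]
    for _ in range(n - mid):
        x = step(x)
        tape.append(x)
    xbar = 1.0
    for i in range(n - mid - 1, -1, -1):
        xbar = step_adjoint(tape[i], xbar)

    # segment 1: recompute from the snapshot, tape, then reverse
    tape = [snapshot]
    x = snapshot
    for _ in range(mid):
        x = step(x)
        tape.append(x)
    for i in range(mid - 1, -1, -1):
        xbar = step_adjoint(tape[i], xbar)
    return xbar
```

In this sketch the snapshot is a single float, but in a real adjoint program it is the subset of the program state the checkpointed segment reads, and choosing that subset (eagerly at the checkpoint, or lazily as the paper's "lazy-snapshot" option does) is precisely the data-flow question the paper analyzes.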

Citation (APA)

Dauvergne, B., & Hascoët, L. (2006). The data-flow equations of checkpointing in reverse Automatic Differentiation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3994 LNCS-IV, pp. 566–573). Springer Verlag. https://doi.org/10.1007/11758549_78
