Backwards Differentiation in AD and Neural Nets: Past Links and New Opportunities

Abstract

Backwards calculation of derivatives - sometimes called the reverse mode, the full adjoint method, or backpropagation - has been developed and applied in many fields. This paper reviews several strands of its history, advanced capabilities, and types of application - particularly those that are crucial to the development of brain-like capabilities in intelligent control and artificial intelligence.
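To make the idea concrete, below is a minimal, illustrative sketch of reverse-mode differentiation in Python. It is not code from the paper; the `Var` class and its operator overloads are invented here purely to show how one backwards sweep recovers all partial derivatives of an output with respect to its inputs.

```python
class Var:
    """A scalar value that records how it was computed, so derivatives
    can be accumulated backwards through the computation graph."""

    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs of (parent Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Naive path-enumerating sweep: fine for this tiny graph,
        # though real implementations topologically sort the graph
        # so each node is visited only once.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)


x = Var(3.0)
y = Var(4.0)
f = x * y + x      # f = x*y + x
f.backward()       # one backwards pass yields every partial derivative
print(x.grad)      # df/dx = y + 1 = 5.0
print(y.grad)      # df/dy = x = 3.0
```

The key property, and the reason the method matters for large neural networks, is that the single backwards pass costs about as much as the forward evaluation, regardless of how many inputs there are.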

Citation (APA)
Werbos, P. J. (2006). Backwards Differentiation in AD and Neural Nets: Past Links and New Opportunities. Lecture Notes in Computational Science and Engineering. https://doi.org/10.1007/3-540-28438-9_2
