Backprop

Abstract

We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure.
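
To make the weight-adjustment loop the abstract describes concrete, here is a minimal sketch in Python (with NumPy): a one-hidden-layer network of sigmoid units trained by gradient descent on a squared-error measure, using the XOR task as an example. The task, layer sizes, learning rate, epoch count, and all variable names are illustrative assumptions, not details given in this entry.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR task: input vectors and desired output vectors (illustrative choice).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a 2-4-1 network (sizes are an assumption).
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
lr = 0.5  # learning rate, chosen for this toy example

for epoch in range(10000):
    # Forward pass: hidden activations h, actual output vector y_hat.
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)

    # Error measure: E = 0.5 * sum((y_hat - Y)**2).
    # Backward pass: propagate dE/d(net input) through sigmoid derivatives.
    delta_out = (y_hat - Y) * y_hat * (1 - y_hat)   # output units
    delta_hid = (delta_out @ W2.T) * h * (1 - h)    # hidden units

    # Repeatedly adjust the connection weights to reduce the error measure.
    W2 -= lr * h.T @ delta_out; b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid; b1 -= lr * delta_hid.sum(axis=0)

print(np.round(y_hat, 3))  # typically converges near [0, 1, 1, 0]

Note that delta_hid is formed by sending the output-layer error back through the same weights W2 used in the forward pass; this backward propagation of error is what gives the procedure its name, and it is how the hidden units receive the training signal that lets them develop useful internal features.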

Citation (APA)

Backprop. (2017). In Encyclopedia of Machine Learning and Data Mining (p. 93). Springer US. https://doi.org/10.1007/978-1-4899-7687-1_100030
