Learning input features representations in deep learning

Abstract

Traditionally, when training supervised classifiers with Backpropagation, the training dataset is a static representation of the learning environment. The error on this training set is propagated backwards through all the layers, and the gradient of the error with respect to the classifier's parameters is used to update them. However, this process stops once the parameters between the input layer and the next layer have been updated. We note that a residual error remains that could be propagated further backwards to the feature vector(s) in order to adapt the representation of the input features, and that using this residual error can improve the speed of convergence towards a generalised solution. We present a methodology for applying this technique to Deep Learning methods, such as Deep Neural Networks and Convolutional Neural Networks.

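The following is a minimal sketch, in PyTorch, of how such an input-feature update could look. The network architecture, the separate step size feature_lr, and the plain gradient-descent update on the inputs are illustrative assumptions, not the authors' exact formulation: the input batch is treated as a differentiable leaf so that, after the usual weight update, the gradient of the loss with respect to the inputs can be used to adapt the stored feature representation.

```python
import torch
import torch.nn as nn

# Illustrative model and optimiser; the architecture and hyperparameters
# are assumptions made for this sketch only.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train_step(x, y, feature_lr=1e-3):
    # Treat the input batch as a leaf tensor so backpropagation also
    # produces a gradient with respect to the input features themselves.
    x = x.clone().detach().requires_grad_(True)

    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()      # gradients for the parameters and for x
    optimizer.step()     # standard parameter update

    # Propagate the residual error one step further back: adapt the
    # stored input representation with a small gradient step.
    with torch.no_grad():
        x_adapted = x - feature_lr * x.grad

    return loss.item(), x_adapted.detach()
```

In a sketch like this, the adapted batch returned by train_step would replace the corresponding examples in the training set for subsequent epochs, so that the input representation co-evolves with the network weights.
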
Citation (APA)

Mosca, A., & Magoulas, G. D. (2017). Learning input features representations in deep learning. In Advances in Intelligent Systems and Computing (Vol. 513, pp. 433–445). Springer Verlag. https://doi.org/10.1007/978-3-319-46562-3_28
