A double gradient algorithm to optimize regularization


Abstract

In this article we present a new technique for optimizing the regularization parameter of a cost function. On the one hand, the derivatives of the cost function with respect to the weights allow us to optimize the network; on the other hand, the derivatives of the cost function with respect to the regularization parameter allow us to optimize the smoothness of the function realized by the network. We show that by alternating between these two gradient descent optimizations we regulate the smoothness of a neural network. We present the results of this algorithm on a task designed to clearly expose the network's level of smoothness.
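The abstract only states that two gradient descents are interleaved: one on the weights and one on the regularization parameter. The sketch below is a minimal illustration of that alternation, not the paper's exact formulation. It assumes a cost of the form E(w, lam) = ||Xw - y||^2 + lam * ||w||^2, uses ridge-regularized linear regression instead of a neural network, and (as an assumption not taken from the abstract) estimates the derivative with respect to lam from the cost on a held-out split via a finite difference.

```python
import numpy as np

# Hedged sketch of a "double gradient" loop: alternate gradient descent on the
# weights with gradient descent on the regularization parameter lam.
# All data, splits, and step sizes here are illustrative assumptions.

rng = np.random.default_rng(0)

# Synthetic data: a noisy linear target, split into train / held-out parts.
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.5 * rng.normal(size=200)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

def cost(w, lam, X, y):
    # Regularized quadratic cost E(w, lam).
    return np.sum((X @ w - y) ** 2) + lam * np.sum(w ** 2)

def weight_step(w, lam, lr=1e-3, n_steps=20):
    # First gradient descent: on the weights, for a fixed lam.
    for _ in range(n_steps):
        grad_w = 2 * X_tr.T @ (X_tr @ w - y_tr) + 2 * lam * w
        w = w - lr * grad_w
    return w

w = np.zeros(10)
lam, lam_lr, eps = 1.0, 1e-4, 1e-3
for outer in range(50):
    # (1) optimize the weights under the current regularization strength
    w = weight_step(w, lam)
    # (2) second gradient descent: approximate d(held-out cost)/d(lam) with a
    #     finite difference through one weight optimization, then step lam
    c_plus = cost(weight_step(w, lam + eps), 0.0, X_va, y_va)
    c_minus = cost(weight_step(w, lam - eps), 0.0, X_va, y_va)
    grad_lam = (c_plus - c_minus) / (2 * eps)
    lam = max(lam - lam_lr * grad_lam, 0.0)  # keep lam nonnegative

print("final lam:", lam)
print("held-out error per sample:", cost(w, 0.0, X_va, y_va) / len(y_va))
```

In this reading, the outer loop is the alternation described in the abstract: the weight updates fit the data under the current penalty, while the lam updates adjust how strongly smoothness (here, weight shrinkage) is enforced. How the paper itself defines the cost whose derivative drives the lam update is not stated in the abstract, so the held-out-cost choice above is purely an assumption.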

Cite

CITATION STYLE

APA

Czernichow, T. (1997). A double gradient algorithm to optimize regularization. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1327, pp. 289–294). Springer Verlag. https://doi.org/10.1007/bfb0020169
