Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks

Abstract

This paper presents new theoretical results on the backpropagation algorithm with smoothing regularization and adaptive momentum for feedforward neural networks with a single hidden layer. Specifically, we show that the gradient of the error function tends to zero and that the weight sequence converges to a fixed point as n (the number of iteration steps) tends to infinity. Our results are also more general than earlier ones, since we do not require the error function to be quadratic or uniformly convex, and the conditions on the neuronal activation functions are relaxed. Moreover, compared with existing algorithms, the proposed algorithm yields a sparser network structure: it forces weights to become smaller during training so that they can eventually be removed after training, which simplifies the network structure and lowers operation time. Finally, two numerical experiments are presented to illustrate the main results in detail.
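To make the training scheme described above concrete, the sketch below shows batch gradient descent for a single-hidden-layer network with a smoothed sparsity penalty and a simple adaptive momentum rule. The penalty surrogate, its smoothing parameter eps, and the momentum adaptation used here are illustrative assumptions for this sketch and are not taken from the paper's exact formulas.

```python
# Minimal sketch, not the paper's algorithm: batch gradient training with a
# smoothed sparsity penalty and an assumed adaptive momentum rule.
import numpy as np

def smoothed_penalty_grad(w, eps=1e-3):
    # Gradient of a smooth surrogate of |w|^(1/2): |w| is replaced by
    # sqrt(w**2 + eps) so the penalty is differentiable at zero (assumed form).
    return 0.5 * w / (w**2 + eps) ** 0.75

def train(X, y, hidden=8, lr=0.05, lam=1e-3, epochs=500, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))  # input -> hidden weights
    w2 = rng.normal(scale=0.5, size=hidden)                # hidden -> output weights
    V1, v2 = np.zeros_like(W1), np.zeros_like(w2)          # momentum buffers
    n = len(y)
    for _ in range(epochs):
        H = np.tanh(X @ W1)          # hidden activations
        err = H @ w2 - y             # residual of the linear output
        # Batch gradients of the mean squared error plus the smoothed penalty.
        g2 = H.T @ err / n + lam * smoothed_penalty_grad(w2)
        g1 = X.T @ (np.outer(err, w2) * (1.0 - H**2)) / n + lam * smoothed_penalty_grad(W1)
        # Adaptive momentum (illustrative rule): keep a large momentum
        # coefficient only while the previous update still points downhill.
        b2 = 0.9 if np.vdot(v2, -g2) > 0 else 0.1
        b1 = 0.9 if np.vdot(V1, -g1) > 0 else 0.1
        v2 = b2 * v2 - lr * g2
        V1 = b1 * V1 - lr * g1
        w2 += v2
        W1 += V1
    return W1, w2
```

Under this kind of penalty, small weights are driven toward zero during training, which is the mechanism behind the sparser network structure mentioned in the abstract.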

Cite

APA: Fan, Q., Wu, W., & Zurada, J. M. (2016). Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks. SpringerPlus, 5(1). https://doi.org/10.1186/s40064-016-1931-0
