The minimization of empirical risk through stochastic gradient descent with momentum algorithms


Abstract

Learning problems are always affected by a certain amount of risk, which is measured empirically through risk functions. The empirical estimates of these risk functionals consist of averages over tuples of data points. Motivated by this, the work presents a stochastic approximation method for solving risk-minimization problems. In the large-dataset setting, gradient estimates are obtained by sampling tuples of data points with replacement. A mathematical proposition is presented showing that this sampling strategy has a considerable impact on the prediction model's generalization ability under stochastic gradient descent with momentum, and that the method achieves a favorable trade-off between accuracy and computational cost. Experimental results on area-under-the-curve (AUC) maximization and metric learning support this approach.
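The abstract's core idea (gradient estimates over tuples of data points sampled with replacement, optimized by SGD with momentum) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the linear scorer, the squared pairwise surrogate loss for AUC, and all hyperparameter values are assumptions chosen for clarity.

```python
import numpy as np

def sgd_momentum_pairwise(X_pos, X_neg, n_iters=2000, lr=0.01,
                          beta=0.9, batch=32, seed=0):
    """Minimize an empirical pairwise (AUC-style) surrogate loss with
    SGD + momentum.  Each gradient estimate averages over a batch of
    (positive, negative) tuples sampled WITH replacement, mirroring
    the sampling strategy described in the abstract.  The linear
    scorer and squared surrogate loss are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    d = X_pos.shape[1]
    w = np.zeros(d)   # weights of the linear scorer x -> w @ x
    v = np.zeros(d)   # momentum buffer
    for _ in range(n_iters):
        # Sample tuple indices (i, j) with replacement.
        i = rng.integers(0, len(X_pos), size=batch)
        j = rng.integers(0, len(X_neg), size=batch)
        diff = X_pos[i] - X_neg[j]       # (batch, d)
        margin = diff @ w                # score gap s(x+) - s(x-)
        # Surrogate loss mean((1 - margin)^2) encourages margin >= 1;
        # its gradient w.r.t. w is -2/batch * (1 - margin) @ diff.
        grad = (-2.0 / batch) * ((1.0 - margin) @ diff)
        v = beta * v + grad              # heavy-ball momentum update
        w = w - lr * v
    return w

def auc(w, X_pos, X_neg):
    """Empirical AUC of the linear scorer on held samples."""
    s_pos, s_neg = X_pos @ w, X_neg @ w
    return float(np.mean(s_pos[:, None] > s_neg[None, :]))
```

On well-separated synthetic data (e.g. two Gaussian classes), the learned scorer reaches a high empirical AUC, illustrating the kind of AUC-maximization experiment the abstract reports.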

Citation (APA)

Chaudhuri, A. (2019). The minimization of empirical risk through stochastic gradient descent with momentum algorithms. In Advances in Intelligent Systems and Computing (Vol. 985, pp. 168–181). Springer Verlag. https://doi.org/10.1007/978-3-030-19810-7_17
