Reweighted l2-regularized dual averaging approach for highly sparse stochastic learning

Abstract

Recent advances in dual averaging schemes for primal-dual subgradient methods and stochastic learning have revealed an ongoing and growing interest in making stochastic and online approaches consistent and tailored to sparsity-inducing norms. In this paper we focus on the reweighting scheme in the l2-Regularized Dual Averaging approach, which retains the properties of a strongly convex optimization objective while approximating, in the limit, an l0-type penalty. Our analysis focuses on the regret and convergence criteria of this approximation. We derive our results in terms of a sequence of strongly convex optimization objectives obtained via smoothing of a non-smooth, subdifferentiable loss function such as the hinge loss. We report an empirical evaluation of convergence in terms of the cumulative training error and the stability of the selected feature set. Experiments also show some improvement over the l1-RDA method in generalization error.
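
The abstract does not spell out the update rule, but the ingredients it names (dual averaging of subgradients, a strongly convex quadratic penalty, and iterative reweighting toward an l0-type penalty) admit a compact illustration. Below is a minimal Python sketch of this general idea, not the authors' exact scheme: the per-coordinate weights theta_i = 1/(w_i^2 + eps) realize the classical reweighted-l2 surrogate for l0 (at the current iterate, sum_i w_i^2 / (w_i^2 + eps) approaches the number of nonzeros as eps -> 0), while the closed-form per-coordinate solve, the step constant gamma, and the truncation threshold are illustrative assumptions.

```python
import numpy as np

def hinge_subgradient(w, x, y):
    """Subgradient of the hinge loss max(0, 1 - y * <w, x>) at w."""
    return -y * x if y * np.dot(w, x) < 1.0 else np.zeros_like(x)

def reweighted_l2_rda(data, lam=0.1, gamma=1.0, eps=1e-3, thresh=1e-4):
    """Hypothetical reweighted l2-RDA sketch (not the paper's exact update).

    Maintains a running average of subgradients g_bar (dual averaging) and
    solves, in closed form, the per-coordinate strongly convex problem
        w = argmin_w  <g_bar, w> + (lam/2) * sum_i theta_i * w_i^2
                      + (gamma / (2 * sqrt(t))) * ||w||^2,
    where theta_i = 1 / (w_i^2 + eps) is the classical reweighted-l2
    surrogate for the l0 penalty.
    """
    n_features = data[0][0].shape[0]
    w = np.zeros(n_features)
    g_bar = np.zeros(n_features)
    theta = np.ones(n_features)           # per-coordinate penalty weights
    for t, (x, y) in enumerate(data, start=1):
        g = hinge_subgradient(w, x, y)
        g_bar += (g - g_bar) / t          # running average of subgradients
        # Closed-form minimizer of the quadratic objective above.
        w = -g_bar / (lam * theta + gamma / np.sqrt(t))
        w[np.abs(w) < thresh] = 0.0       # truncate tiny weights -> sparsity
        theta = 1.0 / (w ** 2 + eps)      # re-estimate weights (l0 surrogate)
    return w
```

On a stream of (x, y) pairs with y in {-1, +1}, the running subgradient average stabilizes the iterates, while the alternation between the quadratic solve and the weight re-estimation pushes small coordinates toward exact zeros, which is the source of the high sparsity the title refers to.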

Citation (APA)

Jumutc, V., & Suykens, J. A. K. (2014). Reweighted l2-regularized dual averaging approach for highly sparse stochastic learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8866, pp. 232–242). Springer Verlag. https://doi.org/10.1007/978-3-319-12436-0_26
