Learning performance of elastic-net regularization

Citations: 2 · Readers (Mendeley): 5

Abstract

In this paper, we address the elastic-net regularization problem within the framework of statistical learning theory. Under a capacity assumption on the hypothesis space, which is composed of infinitely many features, contributions are made in several respects. First, concentration estimates for the sample error are derived by introducing the ℓ2-empirical covering number and applying an iteration technique. Second, a constructive approach to estimating the approximation error is presented. Third, elastic-net learning with infinitely many features is studied, and the role played by the tuning parameter ζ is discussed. Finally, the resulting learning rate is shown to be faster than those in existing results. © 2012 Elsevier Ltd.
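The paper's exact formulation is not reproduced in this abstract, so as a minimal sketch (assuming the standard elastic-net objective, with the tuning parameter ζ written as `zeta` mixing the ℓ1 and ℓ2 penalties), the problem can be solved by coordinate descent with soft-thresholding; the function names and the placement of ζ here are illustrative assumptions, not the authors' notation:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator S(z, t) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def elastic_net_cd(X, y, lam, zeta, n_iter=500):
    """Coordinate descent for the (assumed) elastic-net objective
       (1/2n)||y - Xw||^2 + lam*(zeta*||w||_1 + ((1-zeta)/2)*||w||_2^2).
    zeta=1 recovers the lasso penalty, zeta=0 a ridge-type penalty."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n          # per-feature curvature terms
    l1, l2 = lam * zeta, lam * (1.0 - zeta)
    for _ in range(n_iter):
        for j in range(d):
            # residual with feature j's contribution removed
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            w[j] = soft_threshold(rho, l1) / (col_sq[j] + l2)
    return w
```

With ζ close to 1 the ℓ1 term dominates and irrelevant coefficients are driven exactly to zero, while the ℓ2 term keeps the update well-conditioned when features are correlated, which is the trade-off the tuning parameter ζ controls.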

Citation (APA)
Zhao, Y. L., & Feng, Y. L. (2013). Learning performance of elastic-net regularization. Mathematical and Computer Modelling, 57(5–6), 1395–1407. https://doi.org/10.1016/j.mcm.2012.11.028
