In this paper we address the elastic-net regularization problem within the framework of statistical learning theory. Based on a capacity assumption on the hypothesis space composed of infinitely many features, contributions are made in several respects. First, concentration estimates for the sample error are presented by introducing the ℓ2-empirical covering number and employing an iteration technique. Second, a constructive approach for estimating the approximation error is presented. Third, elastic-net learning with infinitely many features is studied, and the role played by the tuning parameter ζ is discussed. Finally, our learning rate is shown to be faster than existing results. © 2012 Elsevier Ltd.
Zhao, Y. L., & Feng, Y. L. (2013). Learning performance of elastic-net regularization. Mathematical and Computer Modelling, 57(5–6), 1395–1407. https://doi.org/10.1016/j.mcm.2012.11.028
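The elastic-net scheme analyzed in the paper combines an ℓ1 penalty (promoting sparsity) with an ℓ2 penalty (ensuring stability). As a minimal sketch of the finite-dimensional linear case only, the objective (1/2n)·‖Xw − y‖² + λ1‖w‖₁ + (λ2/2)‖w‖² can be minimized by proximal gradient descent; the function name, parameter values, and solver choice below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def elastic_net_ista(X, y, lam1=0.1, lam2=0.1, n_iter=1000):
    """Illustrative ISTA solver (not the paper's method) for
    (1/2n)||Xw - y||^2 + lam1*||w||_1 + (lam2/2)*||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    # step size from the Lipschitz constant of the smooth part
    L = np.linalg.norm(X, 2) ** 2 / n + lam2
    t = 1.0 / L
    for _ in range(n_iter):
        # gradient of the smooth part: least-squares term plus l2 penalty
        grad = X.T @ (X @ w - y) / n + lam2 * w
        v = w - t * grad
        # soft-thresholding: proximal operator of the l1 term
        w = np.sign(v) * np.maximum(np.abs(v) - t * lam1, 0.0)
    return w
```

With small λ1 and λ2 the solver approximately recovers a sparse coefficient vector from noiseless linear data, illustrating the sparsity-inducing role of the ℓ1 component that the tuning parameter ζ balances in the paper's infinite-feature setting.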