In high-dimensional settings, the componentwise L2Boosting method has been used to construct sparse models with high prediction accuracy, but it tends to select many ineffective variables. Several sparse boosting methods, such as sparse L2Boosting and Twin Boosting, have been proposed to improve the variable selection of the L2Boosting algorithm. In this paper, we propose a new general sparse boosting method (GSBoosting). Relations are established between GSBoosting and other well-known regularized variable selection methods in the orthogonal linear model, such as the adaptive Lasso and hard thresholding. Simulation results show that GSBoosting performs well in both prediction and variable selection. © Springer-Verlag 2012.
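For readers unfamiliar with the baseline the paper improves on, the following is a minimal sketch of componentwise L2Boosting for linear regression: at each iteration, the single predictor that best fits the current residual is selected, and the model takes a small step (shrinkage factor `nu`) in that direction. The function name, step count, and shrinkage value here are illustrative choices, not the paper's specification.

```python
import numpy as np

def componentwise_l2_boosting(X, y, n_steps=1000, nu=0.1):
    """Componentwise L2Boosting sketch: greedily fit one predictor at a
    time to the squared-error residual, with shrinkage factor nu."""
    n, p = X.shape
    beta = np.zeros(p)
    residual = y.astype(float).copy()
    col_norms = (X ** 2).sum(axis=0)  # per-column squared norms
    for _ in range(n_steps):
        # Least-squares coefficient of each column against the residual.
        coefs = X.T @ residual / col_norms
        # Pick the component giving the largest drop in residual sum of squares.
        scores = coefs ** 2 * col_norms
        j = int(np.argmax(scores))
        beta[j] += nu * coefs[j]
        residual -= nu * coefs[j] * X[:, j]
    return beta
```

Because every iteration may touch a new coordinate, the fitted coefficient vector tends to become dense along the boosting path, which is the over-selection behavior the sparse boosting variants discussed in the paper aim to correct.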
CITATION STYLE
Zhao, J. (2012). Sparse boosting with correlation based penalty. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7713 LNAI, pp. 161–172). https://doi.org/10.1007/978-3-642-35527-1_14