Penalized composite quasi-likelihood for ultrahigh dimensional variable selection

Abstract

In high dimensional model selection problems, penalized least squares approaches have been used extensively. This paper addresses the question of both robustness and efficiency of penalized model selection methods and proposes a data-driven weighted linear combination of convex loss functions, together with a weighted L1-penalty. The approach is completely data adaptive and does not require prior knowledge of the error distribution. The weighted L1-penalty is used both to ensure the convexity of the penalty term and to ameliorate the bias caused by the L1-penalty. In the setting where the dimensionality is much larger than the sample size, we establish a strong oracle property of the proposed method, which enjoys both model selection consistency and estimation efficiency for the true non-zero coefficients. As specific examples, we introduce a robust composite L1-L2 method and an optimal composite quantile method, and evaluate their performance on both simulated and real data examples. © 2011 Royal Statistical Society.
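
The composite loss described in the abstract is a weighted sum of convex losses that share a single coefficient vector, penalized by a weighted L1 term. As a concrete illustration only (not the authors' implementation), the sketch below solves a penalized composite quantile regression with CVXPY; the function name, default quantile levels, equal loss weights, and penalty values are placeholder assumptions.

```python
# A minimal illustrative sketch, assuming CVXPY is available.
# All defaults (quantile levels, equal weights, lambda) are hypothetical.
import numpy as np
import cvxpy as cp

def composite_quantile_lasso(X, y, taus=(0.25, 0.5, 0.75),
                             loss_weights=None, lam=0.1, penalty_weights=None):
    """Weighted composite quantile loss with a weighted L1-penalty.

    The coefficient vector beta is shared across quantile levels; each
    level gets its own intercept, as in composite quantile regression.
    """
    n, p = X.shape
    K = len(taus)
    w = np.ones(K) / K if loss_weights is None else np.asarray(loss_weights)
    d = np.ones(p) if penalty_weights is None else np.asarray(penalty_weights)

    beta = cp.Variable(p)
    b = cp.Variable(K)          # one intercept per quantile level

    loss = 0
    for k, tau in enumerate(taus):
        r = y - X @ beta - b[k]
        # Check (pinball) loss: rho_tau(u) = max(tau*u, (tau - 1)*u)
        loss += w[k] * cp.sum(cp.maximum(tau * r, (tau - 1) * r))

    objective = loss / n + lam * cp.sum(cp.multiply(d, cp.abs(beta)))
    cp.Problem(cp.Minimize(objective)).solve()
    return beta.value, b.value

# Example usage with toy data (purely illustrative):
# rng = np.random.default_rng(0)
# X = rng.standard_normal((100, 20))
# y = X[:, 0] - 2 * X[:, 1] + rng.standard_normal(100)
# beta_hat, intercepts = composite_quantile_lasso(X, y, lam=0.05)
```

In the paper, the loss weights and the L1 penalty weights are chosen in a data-driven way rather than fixed a priori as in this sketch.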

Citation (APA)

Bradic, J., Fan, J., & Wang, W. (2011). Penalized composite quasi-likelihood for ultrahigh dimensional variable selection. Journal of the Royal Statistical Society. Series B: Statistical Methodology, 73(3), 325–349. https://doi.org/10.1111/j.1467-9868.2010.00764.x
