Penalized and constrained LAD estimation in fixed and high dimension


Abstract

Recently, many studies have shown that prior information and structure in a variety of application fields can be formulated as linear constraints on regression coefficients. Following this line of work, we propose an L1-penalized LAD estimator subject to linear constraints. Unlike the constrained lasso, our estimator performs well when the response contains heavy-tailed errors or outliers. In theory, we show that the proposed estimator enjoys the oracle property with an adjusted normal variance when the dimension p of the estimated coefficients is fixed. When p is much greater than the sample size n, the error bound of the proposed estimator is sharper than k log(p)/n. Notably, this result holds for a wide range of noise distributions, even the Cauchy distribution. Algorithmically, we not only cast the proposed estimation as a standard linear program in fixed dimension, but also present a nested alternating direction method of multipliers (ADMM) for high dimension. Simulations and an application to real data confirm that the proposed estimator is an effective alternative when the constrained lasso is unreliable.
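To make the fixed-dimension linear-programming formulation concrete, the sketch below solves min_β Σ|y − Xβ| + λΣ|β| subject to Aβ ≤ b by introducing nonnegative slack variables for the two absolute-value terms and calling SciPy's `linprog`. This is only an illustrative reformulation under standard LP-splitting assumptions, not the authors' implementation; the function name and argument layout are our own.

```python
import numpy as np
from scipy.optimize import linprog

def constrained_pen_lad(X, y, lam, A=None, b=None):
    """L1-penalized LAD with optional linear constraints A @ beta <= b,
    written as a linear program (a sketch of the fixed-dimension approach).

        min_beta  sum|y - X beta| + lam * sum|beta|   s.t.  A beta <= b

    Slack variables u >= |y - X beta| (elementwise) and v >= |beta|
    turn both absolute-value terms into linear constraints.
    """
    n, p = X.shape
    # Decision vector: [beta (p, free), u (n, >= 0), v (p, >= 0)]
    c = np.concatenate([np.zeros(p), np.ones(n), lam * np.ones(p)])
    I_n, I_p = np.eye(n), np.eye(p)
    Zn_p, Zp_n = np.zeros((n, p)), np.zeros((p, n))
    A_ub = np.vstack([
        np.hstack([ X,  -I_n, Zn_p]),   #  X beta - u <= y
        np.hstack([-X,  -I_n, Zn_p]),   # -X beta - u <= -y
        np.hstack([ I_p, Zp_n, -I_p]),  #  beta - v <= 0
        np.hstack([-I_p, Zp_n, -I_p]),  # -beta - v <= 0
    ])
    b_ub = np.concatenate([y, -y, np.zeros(p), np.zeros(p)])
    if A is not None:
        # Append the user-supplied linear constraints on beta only.
        A_ub = np.vstack([A_ub, np.hstack([A, np.zeros((A.shape[0], n + p))])])
        b_ub = np.concatenate([b_ub, b])
    bounds = [(None, None)] * p + [(0, None)] * (n + p)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]
```

The dense LP above is fine for fixed, moderate p; for p much larger than n, the paper's nested ADMM avoids forming these O((n + p) × (n + p)) constraint matrices.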

Citation (APA)

Wu, X., Liang, R., & Yang, H. (2022). Penalized and constrained LAD estimation in fixed and high dimension. Statistical Papers, 63(1), 53–95. https://doi.org/10.1007/s00362-021-01229-0
