An l1-oracle inequality for the Lasso in finite mixture Gaussian regression models

Abstract

We consider a finite mixture of Gaussian regression models for high-dimensional heterogeneous data where the number of covariates may be much larger than the sample size. We propose to estimate the unknown conditional mixture density by an l1-penalized maximum likelihood estimator. We shall provide an l1-oracle inequality satisfied by this Lasso estimator with the Kullback–Leibler loss. In particular, we give a condition on the regularization parameter of the Lasso to obtain such an oracle inequality. Our aim is twofold: to extend the l1-oracle inequality established by Massart and Meynet [12] in the homogeneous Gaussian linear regression case, and to present a complementary result to Städler et al. [18], by studying the Lasso for its l1-regularization properties rather than considering it as a variable selection procedure. Our oracle inequality shall be deduced from a finite mixture Gaussian regression model selection theorem for l1-penalized maximum likelihood conditional density estimation, which is inspired by Vapnik's method of structural risk minimization [23] and by the theory on model selection for maximum likelihood estimators developed by Massart in [11]. © EDP Sciences, SMAI 2013.
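For orientation, the display below sketches one common form of an l1-penalized maximum likelihood criterion for a K-component mixture of Gaussian regressions. The notation (mixing proportions π_k, regression vectors β_k, variances σ_k², tuning parameter λ) is illustrative only; the paper's exact parametrization and penalty may differ.

% Illustrative sketch, not the paper's exact formulation.
% Conditional mixture density:
\[
s_\theta(y \mid x) \;=\; \sum_{k=1}^{K} \frac{\pi_k}{\sqrt{2\pi\,\sigma_k^2}}
\exp\!\left(-\frac{(y - x^\top \beta_k)^2}{2\sigma_k^2}\right),
\]
% Lasso-type estimator: penalized negative log-likelihood with an l1 penalty
% on the regression coefficients, tuned by the regularization parameter λ:
\[
\hat{\theta}^{\mathrm{Lasso}}(\lambda) \;\in\;
\operatorname*{arg\,min}_{\theta}
\left\{ -\frac{1}{n}\sum_{i=1}^{n} \log s_\theta(y_i \mid x_i)
\;+\; \lambda \sum_{k=1}^{K} \lVert \beta_k \rVert_1 \right\}.
\]

The oracle inequality discussed in the abstract bounds the Kullback–Leibler risk of such an estimator, provided λ is chosen large enough.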

Citation (APA)
Meynet, C. (2013). An l1-oracle inequality for the Lasso in finite mixture Gaussian regression models. ESAIM: Probability and Statistics, 17, 650–671. https://doi.org/10.1051/ps/2012016
