Uncertainty quantification in lasso-type regularization problems

Abstract

Regularization techniques, which sit at the interface of statistical modeling and machine learning, are often used in engineering and other applied sciences to tackle high-dimensional regression-type problems. While a number of regularization methods are in common use, the 'Least Absolute Shrinkage and Selection Operator', or simply LASSO, is popular because of its efficient variable selection property. This property helps with problems where the number of predictors exceeds the number of observations, as the LASSO shrinks the coefficients of unimportant predictors exactly to zero. In this chapter, both frequentist and Bayesian approaches to the LASSO are discussed, with particular attention to the problem of uncertainty quantification for the regression parameters. For the frequentist approach, we discuss a refit technique as well as the classical bootstrap method; for the Bayesian approach, we make use of the equivalent formulation of the LASSO via a Laplace prior on the model parameters.
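
As a rough illustration of the two routes described above, the following minimal Python sketch (not taken from the chapter; the simulated data, the penalty level alpha, and the number of bootstrap replicates are illustrative assumptions) fits a LASSO with scikit-learn and uses the classical nonparametric bootstrap to obtain percentile intervals for the coefficients. The Bayesian route rests on the standard fact that the LASSO estimate is the posterior mode under independent Laplace priors on the coefficients, so sampling from that posterior (as in the Bayesian LASSO of Park and Casella) would yield credible intervals in place of the bootstrap intervals computed here.

# Illustrative sketch only: LASSO point estimate plus bootstrap percentile
# intervals; alpha, B, and the simulated data are assumptions, not chapter values.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 100                                  # more predictors than observations
beta_true = np.zeros(p)
beta_true[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]     # only a few truly relevant predictors
X = rng.standard_normal((n, p))
y = X @ beta_true + rng.standard_normal(n)

# LASSO fit: the L1 penalty shrinks unimportant coefficients exactly to zero,
# which is the variable selection property referred to in the abstract.
lasso = Lasso(alpha=0.1, max_iter=10_000).fit(X, y)
print("selected predictors:", np.flatnonzero(lasso.coef_))

# Classical bootstrap: refit the LASSO on resampled data and read off
# percentile intervals for each coefficient.
B = 500
boot_coefs = np.empty((B, p))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot_coefs[b] = Lasso(alpha=0.1, max_iter=10_000).fit(X[idx], y[idx]).coef_
ci_lower, ci_upper = np.percentile(boot_coefs, [2.5, 97.5], axis=0)

for j in np.flatnonzero(lasso.coef_):
    print(f"beta[{j}]: {lasso.coef_[j]:+.2f}   95% bootstrap CI "
          f"[{ci_lower[j]:+.2f}, {ci_upper[j]:+.2f}]")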

Citation (APA)
Basu, T., Einbeck, J., & Troffaes, M. C. M. (2021). Uncertainty quantification in lasso-type regularization problems. In Optimization Under Uncertainty with Applications to Aerospace Engineering (pp. 81–109). Springer International Publishing. https://doi.org/10.1007/978-3-030-60166-9_3
