Should penalized least squares regression be interpreted as maximum a posteriori estimation?

Abstract

Penalized least squares regression is often used for signal denoising and inverse problems, and is commonly interpreted in a Bayesian framework as a maximum a posteriori (MAP) estimator, the penalty function being the negative logarithm of the prior. For example, the widely used quadratic program (with an $\ell^{1}$ penalty) associated with LASSO/basis pursuit denoising is very often considered as MAP estimation under a Laplacian prior in the context of additive white Gaussian noise (AWGN) reduction. This paper highlights the fact that, while this is one possible Bayesian interpretation, there can be other equally acceptable Bayesian interpretations. Therefore, solving a penalized least squares regression problem with penalty $\phi(x)$ need not be interpreted as assuming a prior $C\cdot\exp(-\phi(x))$ and using the MAP estimator. In particular, it is shown that for any prior $P_{X}$, the minimum mean-square error (MMSE) estimator is the solution of a penalized least squares problem with some penalty $\phi(x)$, which can be interpreted as the MAP estimator with the prior $C\cdot\exp(-\phi(x))$. Vice versa, for certain penalties $\phi(x)$, the solution of the penalized least squares problem is indeed the MMSE estimator, with a certain prior $P_{X}$. In general, $dP_{X}(x) \neq C\cdot\exp(-\phi(x))\,dx$.
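
To make the abstract's distinction concrete, here is a minimal Python sketch of scalar AWGN denoising $Y = X + W$, $W \sim \mathcal{N}(0,\sigma^{2})$, with a Laplacian prior on $X$. The scalar setting and the parameter values are assumptions chosen for illustration, not taken from the paper. The MAP estimator under this prior is the $\ell^{1}$-penalized least squares solution, i.e. soft thresholding, while the MMSE estimator is the posterior mean, approximated here by quadrature on a grid.

```python
import numpy as np

# Illustrative scalar model (assumed, not from the paper):
# Y = X + W, W ~ N(0, sigma^2), X ~ Laplace with density (lam/2) * exp(-lam * |x|).
sigma, lam = 1.0, 1.0

def map_estimate(y):
    # MAP under the Laplacian prior = argmin_x 0.5*(y - x)**2 + sigma**2 * lam * |x|,
    # i.e. soft thresholding at threshold sigma^2 * lam.
    t = sigma ** 2 * lam
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def mmse_estimate(y):
    # MMSE = posterior mean E[X | Y = y], approximated on a uniform grid;
    # the constant grid spacing cancels in the ratio of the two Riemann sums.
    x = np.linspace(-20.0, 20.0, 20001)
    log_post = -lam * np.abs(x) - 0.5 * (y - x) ** 2 / sigma ** 2
    post = np.exp(log_post - log_post.max())  # shift for numerical stability
    return np.sum(x * post) / np.sum(post)

for y in (0.5, 1.5, 3.0):
    print(f"y = {y:3.1f}   MAP = {map_estimate(y):+.4f}   MMSE = {mmse_estimate(y):+.4f}")
```

Under this prior the MAP estimate is exactly zero whenever $|y| \le \sigma^{2}\lambda$, while the posterior mean is nonzero for every $y \neq 0$, so the MMSE estimator is not the MAP estimator for the Laplacian prior. Per the paper, it is nevertheless the exact solution of a penalized least squares problem, hence a MAP estimator, for some other penalty/prior pair.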

Author-supplied keywords

  • Bayesian methods
  • maximum a posteriori estimation
  • mean-square error methods
  • signal denoising

Authors

  • Rémi Gribonval
