Pathwise coordinate optimization

by Jerome Friedman, Trevor Hastie, Holger Höfling, Robert Tibshirani
The Annals of Applied Statistics (2007)

Abstract

We consider “one-at-a-time” coordinate-wise descent algorithms for a class of convex optimization problems. An algorithm of this kind has been proposed for the L1-penalized regression (lasso) in the literature, but it seems to have been largely ignored. Indeed, it seems that coordinate-wise algorithms are not often used in convex optimization. We show that this algorithm is very competitive with the well-known LARS (or homotopy) procedure in large lasso problems, and that it can be applied to related methods such as the garotte and elastic net. It turns out that coordinate-wise descent does not work in the “fused lasso,” however, so we derive a generalized algorithm that yields the solution in much less time than a standard convex optimizer. Finally, we generalize the procedure to the two-dimensional fused lasso, and demonstrate its performance on some image smoothing problems.
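For the lasso, the coordinate-wise update described in the abstract reduces to soft-thresholding each coefficient's partial-residual correlation while all other coefficients are held fixed. The sketch below (Python/NumPy) illustrates this for the objective (1/2)||y - Xb||^2 + lam*||b||_1; the function and variable names are our own illustration, not the authors' implementation.

    import numpy as np

    def soft_threshold(z, gamma):
        # S(z, gamma) = sign(z) * max(|z| - gamma, 0)
        return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

    def lasso_coordinate_descent(X, y, lam, n_iter=100, tol=1e-6):
        # Cyclic coordinate descent: update one coefficient at a time,
        # holding the rest fixed, until the largest change is below tol.
        n, p = X.shape
        beta = np.zeros(p)
        col_sq = (X ** 2).sum(axis=0)        # x_j' x_j for each column
        residual = y - X @ beta              # full residual (starts at y)
        for _ in range(n_iter):
            max_change = 0.0
            for j in range(p):
                if col_sq[j] == 0.0:
                    continue
                old = beta[j]
                # partial residual correlation: x_j' (y - X_{-j} beta_{-j})
                rho = X[:, j] @ residual + col_sq[j] * old
                beta[j] = soft_threshold(rho, lam) / col_sq[j]
                if beta[j] != old:
                    residual -= X[:, j] * (beta[j] - old)  # keep residual in sync
                    max_change = max(max_change, abs(beta[j] - old))
            if max_change < tol:
                break
        return beta

The “pathwise” aspect of the title refers to solving over a decreasing grid of penalty values, using each solution as a warm start for the next; the sketch above handles only a single value of lam.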


Readership Statistics

419 readers on Mendeley

By discipline:
  • 35% Computer and Information Science
  • 23% Mathematics
  • 12% Biological Sciences

By academic status:
  • 44% Ph.D. Student
  • 10% Post Doc
  • 9% Student (Master)

By country:
  • 9% United States
  • 2% China
  • 1% Germany
