Split Bregman method for large scale fused Lasso

  • Gui-Bo Ye
  • Xiaohui Xie

Readers: 52 Mendeley users have this article in their library.
Citations: 35 citations of this article.


Ordering of regression or classification coefficients occurs in many real-world applications. Fused Lasso exploits this ordering by explicitly regularizing the differences between neighboring coefficients through an ℓ1 norm regularizer. However, due to nonseparability and nonsmoothness of the regularization term, solving the fused Lasso problem is computationally demanding. Existing solvers can only deal with problems of small or medium size, or a special case of the fused Lasso problem in which the predictor matrix is the identity matrix. In this paper, we propose an iterative algorithm based on the split Bregman method to solve a class of large-scale fused Lasso problems, including a generalized fused Lasso and a fused Lasso support vector classifier. We derive our algorithm using an augmented Lagrangian method and prove its convergence properties. The performance of our method is tested on both artificial data and real-world applications including proteomic data from mass spectrometry and genomic data from array comparative genomic hybridization (array CGH). We demonstrate that our method is many times faster than the existing solvers, and show that it is especially efficient for large p, small n problems, where p is the number of variables and n is the number of samples. © 2010 Published by Elsevier B.V.
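
The splitting idea described in the abstract can be made concrete with a short sketch. The Python code below is a minimal illustration of a split Bregman (scaled augmented Lagrangian) iteration for the standard one-dimensional fused Lasso objective, min_β ½‖Xβ − y‖² + λ1‖β‖₁ + λ2 Σ_j |β_j − β_{j−1}|. The function name fused_lasso_split_bregman, the penalty parameters mu1 and mu2, the stopping rule, and the dense Cholesky solve are illustrative assumptions, not the paper's exact algorithm or implementation.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding (the shrink operator)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fused_lasso_split_bregman(X, y, lam1, lam2, mu1=1.0, mu2=1.0,
                              n_iter=200, tol=1e-6):
    """Sketch of a split Bregman style iteration for
        min_b 0.5*||X b - y||^2 + lam1*||b||_1 + lam2*sum_j |b_{j+1} - b_j|.
    Auxiliary variables a = b and d = L b are split off, and each
    subproblem is solved exactly (a linear solve / soft-thresholding).
    """
    n, p = X.shape
    # First-difference matrix L of shape (p-1, p): (L b)_j = b_{j+1} - b_j
    L = np.diff(np.eye(p), axis=0)
    # The quadratic term is fixed across iterations, so factor it once.
    A = X.T @ X + mu1 * np.eye(p) + mu2 * (L.T @ L)
    A_chol = np.linalg.cholesky(A)
    Xty = X.T @ y

    beta = np.zeros(p)
    a, d = np.zeros(p), np.zeros(p - 1)   # split variables
    u, v = np.zeros(p), np.zeros(p - 1)   # Bregman (scaled dual) variables

    for _ in range(n_iter):
        beta_prev = beta
        # beta-update: smooth quadratic subproblem via the cached factorization
        rhs = Xty + mu1 * (a - u) + mu2 * (L.T @ (d - v))
        beta = np.linalg.solve(A_chol.T, np.linalg.solve(A_chol, rhs))
        # a- and d-updates: elementwise shrinkage
        a = soft_threshold(beta + u, lam1 / mu1)
        d = soft_threshold(L @ beta + v, lam2 / mu2)
        # Bregman updates accumulate the constraint residuals
        u += beta - a
        v += L @ beta - d
        if np.linalg.norm(beta - beta_prev) <= tol * max(1.0, np.linalg.norm(beta)):
            break
    return beta

# Example usage on simulated data with a piecewise-constant coefficient vector
# in a "large p, small n" setting (all values here are arbitrary).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
beta_true = np.zeros(200)
beta_true[60:80] = 2.0
y = X @ beta_true + 0.1 * rng.standard_normal(50)
beta_hat = fused_lasso_split_bregman(X, y, lam1=0.1, lam2=1.0)
```

Each subproblem in this splitting is simple: a linear solve for β and elementwise soft-thresholding for the split variables, with the dual (Bregman) variables absorbing the constraint residuals; the paper derives this kind of scheme from an augmented Lagrangian and proves its convergence for the generalized fused Lasso and the fused Lasso support vector classifier.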

Author-supplied keywords

  • Bregman iteration
  • Fused Lasso
  • Fused Lasso support vector classifier
  • ℓ1-norm

