Multiplicative updates for L1-regularized linear and logistic regression

Abstract

Multiplicative update rules have proven useful in many areas of machine learning. Simple to implement, guaranteed to converge, they account in part for the widespread popularity of algorithms such as nonnegative matrix factorization and Expectation-Maximization. In this paper, we show how to derive multiplicative updates for problems in L1-regularized linear and logistic regression. For L1-regularized linear regression, the updates are derived by reformulating the required optimization as a problem in nonnegative quadratic programming (NQP). The dual of this problem, itself an instance of NQP, can also be solved using multiplicative updates; moreover, the observed duality gap can be used to bound the error of intermediate solutions. For L1-regularized logistic regression, we derive similar updates using an iteratively reweighted least squares approach. We present illustrative experimental results and describe efficient implementations for large-scale problems of interest (e.g., with tens of thousands of examples and over one million features).
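
To make the NQP reformulation concrete, the sketch below implements the lasso case in NumPy: the signed weights are split as w = u - v with u, v >= 0, which turns the L1-regularized least-squares objective into an NQP, and each coordinate is then rescaled by the closed-form multiplicative factor from the authors' NQP framework. This is a minimal illustration, not the paper's implementation; the function name, initialization, iteration count, and the epsilon guard on the denominator are assumptions.

```python
import numpy as np

def lasso_multiplicative(X, y, lam, n_iter=500, eps=1e-12):
    """Minimize 0.5*||X w - y||^2 + lam*||w||_1 via multiplicative updates.

    Substituting w = u - v (u, v >= 0) gives an NQP in z = [u; v]:
        minimize 0.5 * z^T A z + b^T z  subject to z >= 0,
    solved by the multiplicative update
        z_i <- z_i * (-b_i + sqrt(b_i^2 + 4 (A+ z)_i (A- z)_i)) / (2 (A+ z)_i),
    where A = A+ - A- splits A into its elementwise positive/negative parts.
    """
    n, d = X.shape
    Q = X.T @ X
    c = X.T @ y
    # NQP data for z = [u; v]: A = [[Q, -Q], [-Q, Q]], b = [lam - c; lam + c].
    A = np.block([[Q, -Q], [-Q, Q]])
    b = np.concatenate([lam - c, lam + c])
    Ap = np.maximum(A, 0.0)   # A+  (positive part of A)
    Am = np.maximum(-A, 0.0)  # A-  (magnitudes of negative entries)
    z = np.ones(2 * d)        # strictly positive initialization
    for _ in range(n_iter):
        pos = Ap @ z
        neg = Am @ z
        # Multiplicative factor is always nonnegative, so z stays >= 0;
        # eps guards against a zero denominator.
        z *= (-b + np.sqrt(b * b + 4.0 * pos * neg)) / (2.0 * pos + eps)
    u, v = z[:d], z[d:]
    return u - v  # recover the signed weight vector

# Illustrative usage on synthetic data (assumed setup, not from the paper).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(100)
w_hat = lasso_multiplicative(X, y, lam=0.5)
```

Because the update factor is nonnegative, the iterates never leave the feasible region, which is what makes the scheme attractive for the large sparse problems the abstract mentions. For the logistic case, the abstract indicates that an iteratively reweighted least squares outer loop reduces each step to a weighted instance of the same NQP solve.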

Citation (APA)
Sha, F., Park, Y. A., & Saul, L. K. (2007). Multiplicative updates for L1-regularized linear and logistic regression. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4723 LNCS, pp. 13–24). Springer Verlag. https://doi.org/10.1007/978-3-540-74825-0_2
