Sublinear algorithms for penalized logistic regression in massive datasets

Abstract

Penalized logistic regression (PLR) is a widely used supervised learning model. In this paper, we consider its application to large-scale data problems and resort to a stochastic primal-dual approach for solving PLR. In particular, we employ a random sampling technique in the primal step and a multiplicative weights method in the dual step. This yields an optimization method with sublinear dependence on both the volume and the dimensionality of the training data. We develop concrete algorithms for PLR with ℓ2-norm and ℓ1-norm penalties, respectively. Experimental results on several large-scale, high-dimensional datasets demonstrate both the efficiency and the accuracy of our algorithms. © 2012 Springer-Verlag.
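To make the abstract's description more concrete, the following is a minimal, hypothetical Python/NumPy sketch of the general stochastic primal-dual pattern it refers to: one training example is drawn in the primal step, and a multiplicative-weights update over the examples is performed in the dual step using a single sampled feature coordinate. The function name sublinear_plr_sketch, its hyperparameters, and the exact update rules are illustrative assumptions, not the algorithm from the paper.

import numpy as np

def sublinear_plr_sketch(X, y, lam=0.1, T=1000, eta_w=0.1, eta_p=0.01, rng=None):
    """Illustrative primal-dual sketch for l2-penalized logistic regression.

    Primal step: stochastic gradient update on w using one example drawn
    from the dual distribution p (random sampling).
    Dual step: multiplicative-weights update of p, estimating each margin
    from a single randomly sampled coordinate of w.

    Hypothetical illustration of the general technique described in the
    abstract, not the authors' algorithm.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    w = np.zeros(d)
    p = np.full(n, 1.0 / n)          # dual distribution over examples

    for _ in range(T):
        # Primal step: sample one example i ~ p and take an SGD step
        # on the l2-penalized logistic loss.
        i = rng.choice(n, p=p)
        margin = y[i] * (X[i] @ w)
        grad = -y[i] * X[i] / (1.0 + np.exp(margin)) + lam * w
        w -= eta_w * grad

        # Dual step: multiplicative weights on p.
        # Each margin is estimated from a single sampled coordinate j,
        # so only one feature column is touched per round.
        j = rng.integers(d)
        est_margins = y * (d * w[j] * X[:, j])   # unbiased estimate of y * <x, w>
        losses = np.log1p(np.exp(-np.clip(est_margins, -30.0, 30.0)))
        p *= np.exp(eta_p * losses)              # upweight poorly fit examples
        p /= p.sum()

    return w

The single-coordinate margin estimate is what keeps the per-round cost in the dual step independent of the full dimensionality; averaging several such estimates would reduce variance at the cost of more work per round.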

Citation (APA)

Peng, H., Wang, Z., Chang, E. Y., Zhou, S., & Zhang, Z. (2012). Sublinear algorithms for penalized logistic regression in massive datasets. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7523 LNAI, pp. 553–568). https://doi.org/10.1007/978-3-642-33460-3_41
