Efficient large scale linear programming support vector machines


Abstract

This paper presents a decomposition method for efficiently constructing ℓ1-norm Support Vector Machines (SVMs). The decomposition algorithm introduced in this paper possesses many desirable properties: it is provably convergent, scales well to large datasets, is easy to implement, and can be extended to handle support vector regression and other SVM variants. We demonstrate the efficiency of our algorithm by training on (dense) synthetic datasets of sizes up to 20 million points (in ℝ³²). The results show our algorithm to be several orders of magnitude faster than a previously published method for the same task. We also present experimental results on real datasets; our method is seen to be not only very fast but also highly competitive with the leading SVM implementations. © Springer-Verlag Berlin Heidelberg 2006.
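
For context, the "linear programming" in the title refers to ℓ1-norm SVM training: replacing the usual squared ℓ2 regularizer with ‖w‖₁ turns the training problem into a linear program. The sketch below shows that baseline LP formulation solved directly with scipy.optimize.linprog. It is only an illustration under that standard formulation, not the decomposition algorithm proposed in the paper; the function name l1_svm_lp and the synthetic data are made up for the example.

import numpy as np
from scipy.optimize import linprog

def l1_svm_lp(X, y, C=1.0):
    """Train an l1-norm SVM by solving one linear program.

    X : (n, d) data matrix, y : (n,) labels in {-1, +1}.
    Returns (w, b), where w = w_pos - w_neg with w_pos, w_neg >= 0.
    Objective: sum(w_pos) + sum(w_neg) + C * sum(xi), i.e. ||w||_1 + C * sum(xi).
    """
    n, d = X.shape
    # Variable order in one stacked vector z: [w_pos (d), w_neg (d), b (1), xi (n)]
    c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])

    # Margin constraints y_i (w.x_i + b) >= 1 - xi_i rewritten as A_ub @ z <= b_ub:
    #   -y_i * (w_pos - w_neg).x_i - y_i * b - xi_i <= -1
    Yx = y[:, None] * X
    A_ub = np.hstack([-Yx, Yx, -y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)

    # w_pos, w_neg, xi are nonnegative; the bias b is free.
    bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    z = res.x
    return z[:d] - z[d:2 * d], z[2 * d]

if __name__ == "__main__":
    # Hypothetical synthetic data with a sparse ground-truth weight vector.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 32))
    true_w = np.zeros(32)
    true_w[:4] = 1.0
    y = np.sign(X @ true_w + 0.1 * rng.normal(size=500))
    w, b = l1_svm_lp(X, y, C=1.0)
    print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
    print("nonzero weights:", int(np.sum(np.abs(w) > 1e-6)))

Splitting w into w_pos - w_neg is what makes the ℓ1 norm linear in the variables. The paper's contribution is a decomposition scheme for solving this kind of LP at scales (tens of millions of points) where forming the full dense constraint matrix, as done above, is infeasible.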

Cite

APA

Sra, S. (2006). Efficient large scale linear programming support vector machines. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4212 LNAI, pp. 767–774). Springer Verlag. https://doi.org/10.1007/11871842_78
