A greedy training algorithm for sparse least-squares support vector machines

Abstract

Suykens et al. [1] describe a form of kernel ridge regression known as the least-squares support vector machine (LS-SVM). In this paper, we present a simple but efficient greedy algorithm for constructing near-optimal sparse approximations of least-squares support vector machines, in which at each iteration the training pattern that minimises the regularised empirical risk is introduced into the kernel expansion. The proposed method outperforms the pruning technique described by Suykens et al. [1] on the motorcycle and Boston housing datasets. © Springer-Verlag Berlin Heidelberg 2002.
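The greedy forward-selection procedure summarised above can be sketched roughly as follows. This is a minimal illustration under assumed choices (an RBF kernel, a squared-error loss with a ridge penalty on the expansion weights, and hypothetical parameter names such as `n_basis` and `lam`), not the authors' exact formulation:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of X and rows of Z (an assumed kernel choice)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_sparse_lssvm(X, y, n_basis=10, lam=1e-2, gamma=1.0):
    """Greedy construction of a sparse kernel expansion: at each iteration,
    add the training pattern whose inclusion yields the lowest regularised
    empirical risk (squared error plus a ridge penalty on the weights)."""
    n = len(X)
    K = rbf_kernel(X, X, gamma)                # full kernel matrix over the training set
    selected = []
    best_alpha = None
    for _ in range(n_basis):
        best_i, best_risk = None, np.inf
        for i in range(n):
            if i in selected:
                continue
            S = selected + [i]                 # candidate basis set
            Ks = K[:, S]                       # n x |S| design matrix
            Kss = K[np.ix_(S, S)]              # |S| x |S| kernel sub-matrix
            # ridge solution for this candidate: (Ks^T Ks + lam * Kss) alpha = Ks^T y
            alpha = np.linalg.solve(Ks.T @ Ks + lam * Kss, Ks.T @ y)
            resid = y - Ks @ alpha
            risk = resid @ resid + lam * alpha @ Kss @ alpha
            if risk < best_risk:
                best_i, best_risk, best_alpha = i, risk, alpha
        selected.append(best_i)                # commit the pattern with lowest risk
    return selected, best_alpha
```

For example, fitting a noiseless sine curve with a handful of greedily chosen centres gives a compact expansion whose predictions at new points are `rbf_kernel(X_new, X[selected], gamma) @ alpha`. The exhaustive inner search costs one linear solve per candidate per iteration; the paper's contribution is in making this style of selection efficient.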

APA

Cawley, G. C., & Talbot, N. L. C. (2002). A greedy training algorithm for sparse least-squares support vector machines. In Lecture Notes in Computer Science (Vol. 2415, pp. 681–686). Springer-Verlag. https://doi.org/10.1007/3-540-46084-5_111
