Resilient approximation of kernel classifiers

Abstract

Trained support vector machines (SVMs) have slow run-time classification speed if the classification problem is noisy and the sample data set is large. Approximating the SVM by a sparser function has been proposed to solve this problem. In this study, different variants of approximation algorithms are empirically compared. It is shown that gradient descent using the improved Rprop algorithm increases the robustness of the method compared to fixed-point iteration. Three different heuristics for selecting the support vectors to be used in the construction of the sparse approximation are proposed. It turns out that none is superior to random selection. The effect of a finishing gradient descent on all parameters of the sparse approximation is studied. © Springer-Verlag Berlin Heidelberg 2007.
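The abstract's key ingredient is gradient descent with an improved Rprop update, which adapts a per-parameter step size from the sign of successive gradients rather than their magnitude. As a minimal illustrative sketch (this is the generic iRprop⁻ rule applied to a toy quadratic, not the authors' kernel-approximation code, and all function and parameter names here are invented for the example):

```python
import numpy as np

def irprop_minus(grad, w, n_steps=100, eta_plus=1.2, eta_minus=0.5,
                 delta0=0.1, delta_min=1e-6, delta_max=50.0):
    """Minimize a function via iRprop-: each parameter keeps its own
    step size, grown when the gradient sign persists and shrunk when
    it flips (the flip also zeroes that gradient component)."""
    w = np.asarray(w, dtype=float).copy()
    delta = np.full_like(w, delta0)      # per-parameter step sizes
    g_prev = np.zeros_like(w)
    for _ in range(n_steps):
        g = grad(w)
        sign_change = g * g_prev
        # Same sign as last step: grow the step size (capped at delta_max).
        delta = np.where(sign_change > 0,
                         np.minimum(delta * eta_plus, delta_max), delta)
        # Sign flip (overshoot): shrink the step size and suppress this update.
        delta = np.where(sign_change < 0,
                         np.maximum(delta * eta_minus, delta_min), delta)
        g = np.where(sign_change < 0, 0.0, g)
        w -= np.sign(g) * delta          # move by step size, not gradient magnitude
        g_prev = g
    return w

# Usage: minimize f(w) = ||w - target||^2, whose gradient is 2 (w - target).
target = np.array([1.0, 2.0])
w_opt = irprop_minus(lambda w: 2.0 * (w - target), np.array([5.0, -3.0]))
```

Because only gradient signs are used, the update is insensitive to the scale of the objective, which is the robustness property the abstract contrasts with fixed-point iteration.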

CITATION STYLE

APA

Suttorp, T., & Igel, C. (2007). Resilient approximation of kernel classifiers. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4668 LNCS, pp. 139–148). Springer Verlag. https://doi.org/10.1007/978-3-540-74690-4_15
