A note on support vector machine degeneracy

Abstract

When training Support Vector Machines (SVMs) over nonseparable data sets, the threshold b is set using any dual cost coefficient that lies strictly between its bounds of 0 and C. We show that there exist SVM training problems whose dual optimal solutions have every coefficient at a bound, but that all such problems are degenerate in the sense that the “optimal separating hyperplane” is given by w = 0, so the resulting (degenerate) SVM classifies all future points identically (as the class that supplies more training data). We also derive necessary and sufficient conditions on the input data for this to occur. Finally, we show that any SVM training problem can be made degenerate by the addition of a single data point drawn from a certain unbounded polyhedron, which we characterize in terms of its extreme points and rays.
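The sketch below is a minimal illustration of the degeneracy described in the abstract, not the authors' code. It constructs a maximally nonseparable training set (coincident points with opposite labels), solves the standard soft-margin SVM dual with SciPy's SLSQP solver (the data, C value, and solver choice are illustrative assumptions), and checks that every dual coefficient lands at a bound and that w = 0.

```python
# Minimal sketch (not the authors' code) of SVM degeneracy: a nonseparable
# training set whose dual-optimal solution has every coefficient at a bound
# (0 or C), which forces w = 0 and a constant classifier.
import numpy as np
from scipy.optimize import minimize

# Two coincident point pairs with opposite labels: maximally nonseparable.
X = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, -1.0, 1.0, -1.0])
C = 1.0  # illustrative choice of the box constraint

# Standard soft-margin SVM dual:
#   maximize  sum(alpha) - 0.5 * alpha^T Q alpha,  Q_ij = y_i y_j <x_i, x_j>
#   subject to 0 <= alpha_i <= C and sum(alpha_i * y_i) = 0.
Q = (y[:, None] * y[None, :]) * (X @ X.T)

def neg_dual(alpha):
    # SciPy minimizes, so negate the dual objective.
    return 0.5 * alpha @ Q @ alpha - alpha.sum()

res = minimize(
    neg_dual,
    x0=np.full(len(y), C / 2),
    bounds=[(0.0, C)] * len(y),
    constraints=[{"type": "eq", "fun": lambda a: a @ y}],
    method="SLSQP",
)
alpha = res.x

w = (alpha * y) @ X
print("alpha =", np.round(alpha, 6))  # expected: every coefficient at the bound C
print("w     =", np.round(w, 6))      # expected: the zero vector
# With no alpha_i strictly between 0 and C, the usual rule for recovering the
# threshold b has no coefficient to use; and since w = 0, the decision rule
# sign(<w, x> + b) = sign(b) assigns every future point to a single class.
```

On this input the solver should return alpha = (C, C, C, C), so w = 0 and the trained machine is the constant classifier the paper calls degenerate; the paper's necessary and sufficient conditions characterize exactly which data sets behave this way.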

Citation (APA)

Rifkin, R., Pontil, M., & Verri, A. (1999). A note on support vector machine degeneracy. In Lecture Notes in Computer Science (Vol. 1720, pp. 252–263). Springer. https://doi.org/10.1007/3-540-46769-6_21
