SUPPORT VECTOR MACHINES AND RADON'S THEOREM

Abstract

A support vector machine (SVM) is an algorithm that finds a hyperplane which optimally separates labeled data points in R^n into positive and negative classes. The data points on the margin of this separating hyperplane are called support vectors. We connect the possible configurations of support vectors to Radon's theorem, which provides guarantees for when a set of points can be divided into two classes (positive and negative) whose convex hulls intersect. If the convex hulls of the positive and negative support vectors are projected onto a separating hyperplane, then the projections intersect if and only if the hyperplane is optimal. Further, with a particular type of general position, we show that (a) the projected convex hulls of the support vectors intersect in exactly one point, (b) the support vectors are stable under perturbation, (c) there are at most n + 1 support vectors, and (d) every number of support vectors from 2 up to n + 1 is possible. Finally, we perform computer simulations studying the expected number of support vectors, and their configurations, for randomly generated data. We observe that as the distance between classes of points increases for this type of randomly generated data, configurations with fewer support vectors become more likely.
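The abstract's central claim, that the projections of the convex hulls of the positive and negative support vectors onto a separating hyperplane intersect if and only if the hyperplane is optimal, can be checked numerically in the forward direction. The following is a minimal sketch, not code from the paper: it fits a near-hard-margin linear SVM with scikit-learn (large C approximates the hard margin), projects each class's support vectors onto the fitted hyperplane, and tests whether the projected convex hulls intersect by solving a small feasibility linear program with SciPy. The data, the tolerance, and the helper names are all illustrative assumptions.

    # Sketch: verify that projected convex hulls of support vectors intersect
    # for an (approximately) optimal separating hyperplane.
    import numpy as np
    from sklearn.svm import SVC
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)

    # Two linearly separable Gaussian classes in R^2 (illustrative data).
    X_pos = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(20, 2))
    X_neg = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(20, 2))
    X = np.vstack([X_pos, X_neg])
    y = np.hstack([np.ones(20), -np.ones(20)])

    # Large C approximates a hard-margin SVM on separable data.
    clf = SVC(kernel="linear", C=1e6).fit(X, y)
    w, b = clf.coef_[0], clf.intercept_[0]

    # Split the support vectors by class.
    sv = clf.support_vectors_
    sv_labels = y[clf.support_]
    sv_pos, sv_neg = sv[sv_labels > 0], sv[sv_labels < 0]

    def project(points, w, b):
        """Orthogonal projection onto the hyperplane {x : <w, x> + b = 0}."""
        return points - ((points @ w + b) / (w @ w))[:, None] * w

    def hulls_intersect(P, Q, tol=1e-6):
        """Feasibility LP: do conv(P) and conv(Q) come within tol of a common point?

        Seeks convex weights u (over the rows of P) and v (over the rows of Q)
        with P^T u = Q^T v, up to the tolerance tol.
        """
        m, n = len(P), len(Q)
        d = P.shape[1]
        M = np.hstack([P.T, -Q.T])              # rows of P^T u - Q^T v
        A_ub = np.vstack([M, -M])               # |P^T u - Q^T v| <= tol
        b_ub = np.full(2 * d, tol)
        A_eq = np.vstack([
            np.hstack([np.ones(m), np.zeros(n)]),   # weights over P sum to 1
            np.hstack([np.zeros(m), np.ones(n)]),   # weights over Q sum to 1
        ])
        b_eq = np.array([1.0, 1.0])
        res = linprog(np.zeros(m + n), A_ub=A_ub, b_ub=b_ub,
                      A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (m + n))
        return res.success

    P = project(sv_pos, w, b)
    Q = project(sv_neg, w, b)
    print("projected hulls intersect:", hulls_intersect(P, Q))  # expected: True

For a hard-margin SVM on separable data the expected output is True, and under the general-position assumption of part (a) the intersection should be a single point. With soft margins or non-separable data the support-vector set also contains margin violators, so the statement from the abstract does not apply directly.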

Cite

APA: Adams, H., Farnell, E., & Story, B. (2022). Support vector machines and Radon's theorem. Foundations of Data Science, 4(4), 467–494. https://doi.org/10.3934/fods.2022017
