A Compression Approach to Support Vector Model Selection

  • Ulrike von Luxburg, Olivier Bousquet, Bernhard Schölkopf

Abstract

In this paper we investigate connections between statistical learning theory and data compression on the basis of support vector machine (SVM) model selection. Inspired by several generalization bounds we construct "compression coefficients" for SVMs, which measure the amount by which the training labels can be compressed by some classification hypothesis. The main idea is to relate the coding precision of this hypothesis to the width of the margin of the SVM. The compression coefficients connect well-known quantities such as the radius-margin ratio R^2/ρ^2, the eigenvalues of the kernel matrix, and the number of support vectors. To test whether they are useful in practice we ran model selection experiments on several real-world datasets. As a result we found that compression coefficients can fairly accurately predict the parameters for which the test error is minimized.
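
For illustration, here is a minimal scikit-learn sketch (not taken from the paper) that computes two of the quantities the abstract mentions, the radius-margin ratio R^2/ρ^2 and the number of support vectors, and compares them with the test error across kernel widths. The synthetic dataset, the gamma grid, and the crude bound R^2 ≤ 1 (valid for RBF kernels, where all feature-space points lie on the unit sphere) are assumptions of this sketch; the paper's actual compression coefficients combine these quantities differently.

```python
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical illustration, not the paper's exact compression coefficients:
# score RBF-SVM kernel widths by the radius-margin ratio R^2/rho^2 and the
# number of support vectors, then compare against the test error.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for gamma in [0.01, 0.1, 1.0]:
    svm = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X_tr, y_tr)

    # ||w||^2 from the dual solution: dual_coef_ holds y_i * alpha_i, so
    # ||w||^2 = sum_ij (y_i a_i)(y_j a_j) K(x_i, x_j) over support vectors.
    K_sv = rbf_kernel(svm.support_vectors_, gamma=gamma)
    w_norm_sq = (svm.dual_coef_ @ K_sv @ svm.dual_coef_.T).item()

    # For an RBF kernel every point lies on the unit sphere in feature
    # space, so R^2 <= 1 is a crude enclosing-ball bound (an assumption of
    # this sketch). Since rho = 1/||w||, we get R^2/rho^2 = R^2 * ||w||^2.
    radius_margin = 1.0 * w_norm_sq

    test_err = 1.0 - svm.score(X_te, y_te)
    print(f"gamma={gamma:5.2f}  R^2/rho^2={radius_margin:9.2f}  "
          f"#SV={len(svm.support_):3d}  test error={test_err:.3f}")
```

In the spirit of the paper's experiments, one would pick the kernel width that minimizes such a score and check whether it coincides with the minimizer of the test error.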

Cite

APA

von Luxburg, U., Bousquet, O., & Schölkopf, B. (2004). A compression approach to support vector model selection. Journal of Machine Learning Research, 5, 293–323.
