Reliability of cross-validation for SVMs in high-dimensional, low sample size scenarios


Abstract

For given two-class data, a Support Vector Machine (SVM) learns a classifier that aims for good generalisation by maximising the minimal margin between the two classes. Its performance can be estimated with cross-validation strategies. For low sample size data, however, high dimensionality can cause strong side effects that significantly bias the estimated performance of the classifier. On simulated data, we illustrate these effects of high dimensionality for the cross-validation of both hard- and soft-margin SVMs. Based on theoretical results for the limit of infinite dimensionality, we derive heuristics that can easily be used to check whether or not a given data set is subject to these constraints. © Springer-Verlag Berlin Heidelberg 2008.
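The side effect described in the abstract can be sketched numerically. The toy experiment below is not the authors' SVM setup; it uses a simple nearest-centroid classifier as an illustrative stand-in, on pure-noise data with arbitrary labels (all sizes and seeds are assumptions). With many more dimensions than samples, the training ("resubstitution") accuracy looks perfect even though the labels carry no information, while leave-one-out cross-validation reports roughly chance-level accuracy, showing how misleading small-sample, high-dimensional estimates can be.

```python
import numpy as np

def nearest_centroid_predict(X_train, y_train, X_test):
    """Assign each test point to the class with the closer class mean."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    d0 = ((X_test - c0) ** 2).sum(axis=1)
    d1 = ((X_test - c1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

rng = np.random.default_rng(0)
n, d = 20, 2000                       # low sample size, high dimensionality
X = rng.standard_normal((n, d))       # pure noise: no real class structure
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Training accuracy: near-perfect despite random labels (overfitting).
train_acc = (nearest_centroid_predict(X, y, X) == y).mean()

# Leave-one-out cross-validation: close to (or even below) chance level.
hits = []
for i in range(n):
    mask = np.arange(n) != i
    pred = nearest_centroid_predict(X[mask], y[mask], X[i:i + 1])
    hits.append(pred[0] == y[i])
loo_acc = float(np.mean(hits))

print(f"train accuracy: {train_acc:.2f}, LOO accuracy: {loo_acc:.2f}")
```

Rerunning with different seeds shows another point the paper makes: at this sample size the cross-validated estimate itself fluctuates strongly from run to run.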

Citation (APA)

Klement, S., Madany Mamlouk, A., & Martinetz, T. (2008). Reliability of cross-validation for SVMs in high-dimensional, low sample size scenarios. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5163 LNCS, pp. 41–50). https://doi.org/10.1007/978-3-540-87536-9_5
