Exploring fringe settings of SVMs for classification


This article is free to access.

Abstract

There are many practical applications where learning from single-class examples is either the only possible solution or offers a distinct performance advantage. The first case occurs when obtaining examples of a second class is difficult, e.g., classifying sites of "interest" based on web accesses. The second situation is exemplified by the one-class support vector machine, which was the winning submission for the second task of the KDD Cup 2002. This paper explores the limits of supervised learning using both positive and negative examples. To this end, we analyse the KDD Cup dataset using four classifiers (variants of support vector machines and ridge regression) and several feature selection methods. Our analysis shows that there is a consistent pattern of performance differences between one-class and two-class learning for all algorithms investigated, and these patterns persist even with aggressive dimensionality reduction through automated feature selection. Using insight gained from the above analysis, we generate synthetic data showing a similar pattern of performance.
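The abstract contrasts two-class learning (positive and negative examples) with one-class learning (positive examples only). The following is a minimal sketch of such a comparison, assuming scikit-learn and synthetic imbalanced data; it is not the paper's KDD Cup setup, and the feature dimensions, class weights, and hyperparameters are illustrative choices only:

```python
# Hedged sketch: compare a two-class SVM against a one-class SVM trained on
# positives only, on synthetic imbalanced data (not the KDD Cup 2002 dataset).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, OneClassSVM
from sklearn.metrics import roc_auc_score

# Imbalanced synthetic data: roughly 5% positives, 95% negatives.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)

# Two-class SVM: trained on both positive and negative examples.
two_class = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
auc_two = roc_auc_score(y_te, two_class.decision_function(X_te))

# One-class SVM: trained on the positive examples alone; its decision
# function scores how "inlier-like" a test point is relative to them.
one_class = OneClassSVM(kernel="rbf", gamma="scale", nu=0.5)
one_class.fit(X_tr[y_tr == 1])
auc_one = roc_auc_score(y_te, one_class.decision_function(X_te))

print(f"two-class AUC: {auc_two:.3f}, one-class AUC: {auc_one:.3f}")
```

Ranking both models by ROC AUC on the same held-out split is one simple way to surface the kind of one-class versus two-class performance gap the paper investigates.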

Citation (APA)

Kowalczyk, A., & Raskutti, B. (2003). Exploring fringe settings of SVMs for classification. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2838, pp. 278–290). Springer Verlag. https://doi.org/10.1007/978-3-540-39804-2_26
