A novel and efficient method for testing non linear separability

Abstract

The notion of linear separability is widely used in machine learning research. Learning algorithms that rely on this concept include neural networks (the Single Layer Perceptron and the Recursive Deterministic Perceptron) and kernel machines (Support Vector Machines). Several algorithms for testing linear separability exist. Some of these methods are computationally intensive, and several of them converge only if the classes are linearly separable and fail to terminate otherwise. A fast and efficient test for non-linear separability is proposed, which can be used to pretest classification data sets for non-linear separability, thus avoiding expensive computations. The test is based on the convex hull separability method but does not require the computation of the convex hull. © Springer-Verlag Berlin Heidelberg 2007.
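To illustrate the underlying criterion (not the authors' algorithm, which the abstract only outlines), the sketch below checks linear separability of two finite point sets by solving a linear-programming feasibility problem: the classes are linearly separable exactly when their convex hulls do not intersect, and the LP certifies this without ever constructing the hulls. The use of NumPy and SciPy's linprog is an assumption for the illustration.

```python
# Minimal sketch: linear separability of two labelled point sets via an
# LP feasibility check, avoiding explicit convex hull computation.
# This is an illustrative stand-in, not the method proposed in the paper.
import numpy as np
from scipy.optimize import linprog


def linearly_separable(class_a: np.ndarray, class_b: np.ndarray) -> bool:
    """Return True if a hyperplane w.x + b strictly separates the two classes.

    The LP searches for (w, b) with
        w.x + b >=  1  for every point in class_a,
        w.x + b <= -1  for every point in class_b.
    Feasibility of these constraints certifies that the convex hulls of the
    two sets are disjoint, without computing the hulls themselves.
    """
    d = class_a.shape[1]
    # Rewrite both constraint families as A_ub @ [w, b] <= b_ub.
    rows_a = np.hstack([-class_a, -np.ones((len(class_a), 1))])  # -(w.x + b) <= -1
    rows_b = np.hstack([class_b, np.ones((len(class_b), 1))])    #   w.x + b  <= -1
    A_ub = np.vstack([rows_a, rows_b])
    b_ub = -np.ones(len(A_ub))
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1), method="highs")
    return res.status == 0  # feasible -> linearly separable


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal(loc=(-3, -3), size=(20, 2))
    b = rng.normal(loc=(3, 3), size=(20, 2))
    print(linearly_separable(a, b))  # expected: True (well-separated clouds)
    print(linearly_separable(a, a))  # expected: False (identical, overlapping sets)
```

A pretest such as the one described in the abstract would be used before this kind of (potentially expensive) optimisation or before training a perceptron or SVM, to reject data sets that are already known to be non-linearly separable.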

Citation (APA)

Elizondo, D., Ortiz-de-Lazcano-Lobato, J. M., & Birkenhead, R. (2007). A novel and efficient method for testing non linear separability. In Lecture Notes in Computer Science (Vol. 4668, pp. 737–746). Springer-Verlag. https://doi.org/10.1007/978-3-540-74690-4_75
