This paper deals with an unusual phenomenon in which most machine learning algorithms yield good performance on the training set but systematically worse-than-random performance on the test set. This has so far been observed for some natural data sets and demonstrated for some synthetic data sets when the classification rule is learned from a small set of training samples drawn from a high-dimensional space. The initial analysis presented in this paper shows that anti-learning is a property of data sets and is quite distinct from over-fitting of the training data. Moreover, the analysis leads to a specification of some machine learning procedures which can overcome anti-learning and generate machines able to classify training and test data consistently. © Springer-Verlag Berlin Heidelberg 2005.
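The phenomenon described in the abstract can be reproduced on a toy data set. The sketch below is an illustrative construction, not the paper's exact class-symmetric polyhedron: it builds a Gram matrix in which any two samples from *different* classes are more similar than any two samples from the same class, embeds it in a high-dimensional Euclidean space, and trains a nearest-centroid classifier on a small balanced subset. The classifier fits the training set perfectly yet misclassifies every held-out point, i.e. performs systematically worse than random on the test set. All names and parameter choices here are the author's own for illustration.

```python
import numpy as np

# Synthetic anti-learning data set (illustrative construction): a Gram
# matrix where cross-class similarity (+gamma) exceeds within-class
# similarity (-gamma), yet the matrix remains positive semi-definite.
n = 20
y = np.array([+1] * (n // 2) + [-1] * (n // 2))
gamma = -1.0 / (2 * n)                      # anti-correlation strength
G = np.eye(n) + gamma * np.outer(y, y)      # eigenvalues: 1 and 1/2, so PSD

# Embed: rows of X reproduce G as their Gram matrix (X @ X.T ~= G),
# giving n points in an n-dimensional space.
w, V = np.linalg.eigh(G)
X = V * np.sqrt(np.clip(w, 0.0, None))

# Small balanced training split; everything else is the test set.
train = np.r_[0:5, 10:15]
test = np.setdiff1d(np.arange(n), train)

# Nearest-centroid classifier fitted on the training split.
c_pos = X[train][y[train] == +1].mean(axis=0)
c_neg = X[train][y[train] == -1].mean(axis=0)

def predict(Z):
    d_pos = np.linalg.norm(Z - c_pos, axis=1)
    d_neg = np.linalg.norm(Z - c_neg, axis=1)
    return np.where(d_pos < d_neg, +1, -1)

train_acc = np.mean(predict(X[train]) == y[train])
test_acc = np.mean(predict(X[test]) == y[test])
print(train_acc, test_acc)  # 1.0 on training, 0.0 on test: anti-learning
```

Note that flipping every prediction would give perfect test accuracy, which is exactly why anti-learning is distinct from over-fitting: the test performance is not merely poor but systematically inverted.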
CITATION STYLE
Kowalczyk, A., & Chapelle, O. (2005). An analysis of the anti-learning phenomenon for the class symmetric polyhedron. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3734 LNAI, pp. 78–91). https://doi.org/10.1007/11564089_8