Sharper bounds for the hardness of prototype and feature selection

Abstract

As pointed out by Blum [Blu94], "nearly all results in Machine Learning […] deal with problems of separating relevant from irrelevant information in some way". This paper is concerned with structural complexity issues in the selection of relevant prototypes or features. We give the first results proving that both problems can be much harder than previously expected in the literature, for various notions of relevance. In particular, the worst-case bounds achievable by any efficient algorithm are proven to be very large, often not far from trivial bounds. We believe these results give a theoretical justification for the numerous heuristic approaches found in the literature to cope with these problems.

Citation (APA)

Nock, R., & Sebban, M. (2000). Sharper bounds for the hardness of prototype and feature selection. In Lecture Notes in Computer Science (Vol. 1968, pp. 224–238). Springer-Verlag. https://doi.org/10.1007/3-540-40992-0_17
