The No Free Lunch theorems are often used to argue that domain-specific knowledge is required to design successful algorithms. We use algorithmic information theory to argue the case for a universal bias that allows an algorithm to succeed in all interesting problem domains. Additionally, we give a new algorithm for off-line classification, inspired by Solomonoff induction, with good performance on all structured (compressible) problems under reasonable assumptions. This includes a proof of the efficacy of the well-known heuristic of randomly selecting training data in the hope of reducing the misclassification rate. © 2013 Springer-Verlag Berlin Heidelberg.
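The heuristic mentioned in the abstract — randomly selecting training data to reduce the misclassification rate — can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a toy experiment under assumed details: a one-dimensional threshold concept (a compressible rule), a simple k-nearest-neighbour vote as the learner, and a uniformly random train/test split.

```python
import random

def misclassification_rate(train, test, k=3):
    """Majority vote over the k nearest training points (toy learner)."""
    errors = 0
    for x, y in test:
        nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
        votes = sum(label for _, label in nearest)
        pred = 1 if 2 * votes > k else 0
        if pred != y:
            errors += 1
    return errors / len(test)

random.seed(0)
# Toy structured problem: the label is 1 iff x >= 50, a highly
# compressible rule in the sense of the abstract.
data = [(x, int(x >= 50)) for x in range(100)]
random.shuffle(data)            # randomly select which points are training data
train, test = data[:30], data[30:]
rate = misclassification_rate(train, test)
```

On a structured problem like this, a random training sample tends to cover the input space well, so the measured misclassification rate stays low; an adversarially clustered training set would not.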
CITATION STYLE
Lattimore, T., & Hutter, M. (2013). No free lunch versus Occam’s Razor in supervised learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7070 LNAI, pp. 223–235). Springer Verlag. https://doi.org/10.1007/978-3-642-44958-1_17