A syntactic approach to prediction


Abstract

A central question in the empirical sciences is: given a body of data, how do we best make predictions? There are subtle differences between current approaches, which include Minimum Message Length (MML) and Solomonoff's theory of induction [24]. The nature of hypothesis spaces is explored, and we observe a correlation between the complexity of a function and the frequency with which it is represented. There is not a single best hypothesis, as suggested by Occam's razor (which says to prefer the simplest), but a set of functionally equivalent hypotheses. One set of hypotheses is preferred over another because it is larger, which gives the impression that simpler functions generalize better. The probabilistic weighting of a set of hypotheses is given by the relative size of its equivalence class. We justify Occam's razor by a counting argument over the hypothesis space. Occam's razor contrasts with the No Free Lunch theorems, which state that it is impossible for one machine learning algorithm to generalize better than any other. The No Free Lunch theorems assume a distribution over functions, whereas Occam's razor assumes a distribution over programs. © 2013 Springer-Verlag Berlin Heidelberg.
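
The counting argument can be made concrete with a minimal sketch in Python, assuming a toy hypothesis space that the abstract does not specify: boolean expressions over two inputs built from NOT, AND and OR, enumerated up to a small size bound. Every detail below (the grow helper, the bound of two construction rounds) is an illustrative assumption, not a construction taken from the paper.

from collections import Counter
from itertools import product

# Toy program space (an assumption for illustration): boolean
# expressions over inputs x and y, built from NOT, AND, OR.
ASSIGNMENTS = list(product([False, True], repeat=2))  # all (x, y) inputs

def grow(programs):
    """One construction round: add the NOT, AND and OR combinations of
    the current programs. `programs` maps source string -> truth table;
    keying on source keeps syntactic duplicates from being counted twice."""
    new = dict(programs)
    for src, tab in programs.items():
        new[f"(not {src})"] = tuple(not v for v in tab)
    for (ls, lt), (rs, rt) in product(programs.items(), repeat=2):
        new[f"({ls} and {rs})"] = tuple(a and b for a, b in zip(lt, rt))
        new[f"({ls} or {rs})"] = tuple(a or b for a, b in zip(lt, rt))
    return new

# Leaves of the expression trees: the two input variables.
programs = {"x": tuple(x for x, y in ASSIGNMENTS),
            "y": tuple(y for x, y in ASSIGNMENTS)}
for _ in range(2):  # small size bound; the space grows very fast with depth
    programs = grow(programs)

# Equivalence classes: syntactically distinct programs grouped by the
# function (truth table) they compute.
class_sizes = Counter(programs.values())
total = len(programs)

# Induced weighting over functions: P(f) = |{p : p computes f}| / total.
for table, count in class_sizes.most_common():
    print(f"truth table {table}: {count:4d} programs, weight {count/total:.3f}")

Running the sketch prints the largest equivalence classes first; in this toy space the functions with short definitions (the projections x and y and their simple variants) have the most syntactic spellings and so receive the most weight, which is exactly the relative-class-size weighting described in the abstract.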

Citation (APA)

Woodward, J., & Swan, J. (2013). A syntactic approach to prediction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7070 LNAI, pp. 426–438). Springer-Verlag. https://doi.org/10.1007/978-3-642-44958-1_34
