Predictive complexity and information


Abstract

A new notion of predictive complexity and a corresponding amount of information are considered. Predictive complexity is a generalization of Kolmogorov complexity which bounds the ability of any algorithm to predict elements of a sequence of outcomes. We consider predictive complexity for a wide class of bounded loss functions that generalize the square-loss function. Relations between the unconditional predictive complexity KG(x) and the conditional predictive complexity KG(x|y) are studied. We define an algorithm with an "expanding property": with positive probability it transforms sequences of a given predictive complexity into sequences of essentially larger predictive complexity. The amount of predictive information IG(y : x) is also studied. We show that this information is non-commutative in a very strong sense and present asymptotic relations between the values IG(y : x), IG(x : y), KG(x) and KG(y).
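To make the setting concrete: a prediction strategy outputs a guess before each outcome and suffers a loss such as the square loss; predictive complexity KG(x) lower-bounds (up to an additive constant) the total loss of every computable strategy on x. The following minimal sketch, with hypothetical function names not taken from the paper, computes the cumulative square loss of one simple strategy:

```python
def square_loss(outcome, prediction):
    # The square-loss function: loss(omega, gamma) = (omega - gamma)^2
    return (outcome - prediction) ** 2

def cumulative_loss(outcomes, predictor, loss=square_loss):
    """Total loss a prediction strategy incurs on a binary sequence.

    `predictor` maps the prefix seen so far to a prediction in [0, 1].
    Predictive complexity KG(x) lower-bounds (up to an additive
    constant) this total for every computable strategy.
    """
    total = 0.0
    for t, outcome in enumerate(outcomes):
        total += loss(outcome, predictor(outcomes[:t]))
    return total

# One simple strategy: predict the smoothed frequency of 1s seen so far
# (the Laplace rule of succession).
def laplace(prefix):
    return (sum(prefix) + 1) / (len(prefix) + 2)

print(cumulative_loss([0, 1, 1, 0, 1], laplace))
```

This is only an illustration of the loss setup the abstract refers to; the paper's results concern bounds on such cumulative losses, not any particular strategy.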

Citation (APA)

Vyugin, M. V., & V’yugin, V. V. (2002). Predictive complexity and information. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2375, pp. 90–105). Springer Verlag. https://doi.org/10.1007/3-540-45435-7_7
