Strong entropy concentration, game theory, and algorithmic randomness


Abstract

We give a characterization of Maximum Entropy/Minimum Relative Entropy inference by providing two ‘strong entropy concentration’ theorems. These theorems unify and generalize Jaynes’ ‘concentration phenomenon’ and Van Campenhout and Cover’s ‘conditional limit theorem’. The theorems characterize exactly in what sense a ‘prior’ distribution Q conditioned on a given constraint and the distribution minimizing D(P||Q) over all P satisfying the constraint are ‘close’ to each other. We show how our theorems are related to ‘universal models’ for exponential families, thereby establishing a link with Rissanen’s MDL/stochastic complexity. We then apply our theorems to establish the relationship (A) between entropy concentration and a game-theoretic characterization of Maximum Entropy Inference due to Topsøe and others; (B) between maximum entropy distributions and sequences that are random (in the sense of Martin-Löf/Kolmogorov) with respect to the given constraint. These two applications have strong implications for the use of Maximum Entropy distributions in sequential prediction tasks, both for the logarithmic loss and for general loss functions. We identify circumstances under which Maximum Entropy predictions are almost optimal.
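The central object in the abstract is the distribution minimizing the relative entropy D(P||Q) over all P satisfying a given constraint. As a concrete illustration (not taken from the paper), the sketch below computes this minimizer for Jaynes' classic dice example: Q uniform on {1,...,6} and the moment constraint E_P[X] = 4.5. The minimizer lies in an exponential family, P*(x) ∝ Q(x) exp(λx), so it can be found by solving a one-dimensional equation for λ; the target mean 4.5, the bisection bounds, and the function names are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code) of Minimum Relative
# Entropy inference: prior Q uniform on {1,...,6}, constraint E_P[X] = 4.5.
# The minimizer of D(P||Q) is an exponentially tilted prior
#   P*(x) = Q(x) * exp(lam * x) / Z(lam),
# and lam is chosen so that the mean constraint holds.

import numpy as np

def maxent_dice(target_mean=4.5, lo=-10.0, hi=10.0, tol=1e-12):
    x = np.arange(1, 7, dtype=float)   # die faces 1..6
    q = np.full(6, 1.0 / 6.0)          # uniform prior Q

    def tilt(lam):
        # Exponentially tilted prior: proportional to Q(x) * exp(lam * x)
        w = q * np.exp(lam * x)
        return w / w.sum()

    # The mean under the tilted distribution is increasing in lam, so
    # bisection finds the unique lam matching the target mean.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tilt(mid) @ x < target_mean:
            lo = mid
        else:
            hi = mid
    return tilt(0.5 * (lo + hi))

p_star = maxent_dice()
q = np.full(6, 1.0 / 6.0)
print("P* =", np.round(p_star, 4))
print("mean under P* =", p_star @ np.arange(1, 7))
print("D(P*||Q) =", np.sum(p_star * np.log(p_star / q)))
```

Because Q is uniform here, minimizing D(P||Q) is the same as maximizing the entropy of P; with a non-uniform prior the same tilting construction applies, only Q changes.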

Citation (APA)

Grünwald, P. (2001). Strong entropy concentration, game theory, and algorithmic randomness. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2111, pp. 320–336). Springer-Verlag. https://doi.org/10.1007/3-540-44581-1_21
