Predictive complexity and generalized entropy rate of stationary ergodic processes

Abstract

In the online prediction framework, we use generalized entropy to study the loss rate of predictors when outcomes are drawn according to stationary ergodic distributions over the binary alphabet. We show that the notion of generalized entropy of a regular game [11] is well-defined for stationary ergodic distributions. In proving this, we obtain new game-theoretic proofs of some classical information-theoretic inequalities. Using Birkhoff's ergodic theorem and convergence properties of conditional distributions, we prove that a generalization of the classical Shannon-McMillan-Breiman theorem holds for a restricted class of regular games when no computational constraints are imposed on the prediction strategies. If a game is mixable, then there is an optimal aggregating strategy that loses at most an additive constant compared to any other lower semicomputable strategy. The loss incurred by this algorithm on an infinite sequence of outcomes is called its predictive complexity. We prove that when a restricted regular game has a predictive complexity, the average predictive complexity converges to the generalized entropy of the game almost everywhere with respect to the stationary ergodic distribution.
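
As a rough formal sketch of the statements above, the following LaTeX fragment spells out the two central quantities, generalized entropy and predictive complexity, in the standard notation of Vovk's prediction-with-expert-advice framework. The symbols ($\Gamma$, $\lambda$, $\mathcal{H}$, $\mathcal{K}$) are assumed here for illustration; the paper's own definitions may differ in detail.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch of the abstract's central objects, in assumed (standard) notation;
% the paper's own definitions may differ in detail.
A binary game is a triple $(\{0,1\},\Gamma,\lambda)$, where $\Gamma$ is the
prediction space and $\lambda\colon\Gamma\times\{0,1\}\to[0,\infty]$ is the
loss function. The generalized entropy of a distribution $P$ on $\{0,1\}$ is
the least achievable expected loss of a single prediction:
\[
  \mathcal{H}(P) = \inf_{\gamma\in\Gamma} \mathbb{E}_{\omega\sim P}\,
  \lambda(\gamma,\omega).
\]
Writing $\mathcal{K}(x)$ for the predictive complexity of a finite string $x$
(the cumulative loss of the optimal aggregating strategy, defined up to an
additive constant for mixable games), the convergence result has the
Shannon--McMillan--Breiman form
\[
  \lim_{n\to\infty}\frac{\mathcal{K}(\omega_1\cdots\omega_n)}{n}
  = \mathcal{H}(\mu)
  \quad\text{for $\mu$-almost every }\omega\in\{0,1\}^{\infty},
\]
where $\mathcal{H}(\mu)$ denotes the generalized entropy rate of the
stationary ergodic measure $\mu$.
\end{document}

For the log-loss game, where $\Gamma$ is the set of distributions on $\{0,1\}$ and $\lambda(\gamma,\omega) = -\log\gamma(\omega)$, generalized entropy reduces to Shannon entropy, so the display above recovers the classical Shannon-McMillan-Breiman theorem; this special case is the natural sanity check for the generalization.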

Citation (APA)

Ghosh, M., & Nandakumar, S. (2012). Predictive complexity and generalized entropy rate of stationary ergodic processes. In Lecture Notes in Computer Science (Vol. 7568, pp. 365–379). Springer. https://doi.org/10.1007/978-3-642-34106-9_29
