Abstract
A class of numerical measures of the informativity of observation channels, or statistical experiments, is defined with the aid of f-divergences, introduced by the author as measures of the difference between two probability distributions. For observation channels with given prior probabilities, the f-informativity measures generalize Shannon's mutual information and also include Gallager's function E0(ρ, Q), which appears in the derivation of the error exponent for noisy channels. For observation channels without prior probabilities, the suggested informativity measures admit the geometric interpretation of a radius. The f-informativity defined for the Bayesian case shares several useful properties of the mutual information, e.g. the "data processing theorem". Its maximum over all possible prior distributions is shown by a minimax argument to equal the f-radius, so the latter is a generalization of channel capacity. The f-informativity measures can also be used to characterize the statistical sufficiency of indirect observations. © 1972 Akadémiai Kiadó.
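To make the central notion concrete: a minimal sketch of a discrete f-divergence, D_f(P‖Q) = Σ q_i f(p_i/q_i) for a convex f with f(1) = 0, showing how the choice f(t) = t log t recovers the Kullback–Leibler divergence. The function and variable names below are illustrative, not from the paper; the handling of zero-probability terms is a simplifying assumption (Q is taken to be strictly positive).

```python
import math

def f_divergence(p, q, f):
    """D_f(P || Q) = sum_i q_i * f(p_i / q_i) for discrete distributions.

    Assumes q_i > 0 for every i (a simplification; the general
    definition handles zeros via limits of the convex function f).
    """
    return sum(qi * f(pi / qi) for pi, qi in zip(p, q))

def kl(p, q):
    """f(t) = t * log(t) yields the Kullback-Leibler divergence (nats);
    the convention 0 * log(0) = 0 is applied for zero p_i."""
    return f_divergence(p, q, lambda t: t * math.log(t) if t > 0 else 0.0)

p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl(p, q))   # strictly positive, since P != Q
print(kl(p, p))   # 0.0, since D_f(P || P) = f(1) = 0
```

In the same spirit, the mutual information that the abstract generalizes is the KL divergence between a joint distribution and the product of its marginals, which is why plugging other convex f into the same template produces the f-informativity family.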
Citation
Csiszár, I. (1972). A class of measures of informativity of observation channels. Periodica Mathematica Hungarica, 2(1–4), 191–213. https://doi.org/10.1007/BF02018661