We examine the possibility of justifying the principle of maximum relative entropy (MRE), considered as an updating rule, by appeal to the value-of-learning theorem established in classical decision theory. This theorem captures an intuitive requirement on learning: learning should lead to new degrees of belief that are expected to be helpful, and never harmful, in making decisions. We call this requirement the value of learning. We consider the extent to which updating by MRE satisfies this requirement and so can serve as a rational means for pursuing practical goals. First, by representing MRE updating as a conditioning model, we show that MRE satisfies the value of learning in cases where learning prompts a complete redistribution of one's degrees of belief over a partition of propositions. Second, we show that the value of learning may not be satisfied in general by MRE updates in cases of updating on a change in one's conditional degrees of belief. We explain that this is so because, contrary to what the value of learning requires, one's prior degrees of belief might not equal the expectation of one's posterior degrees of belief. This, in turn, points towards a more general moral: the justification of MRE updating in terms of the value of learning may be sensitive to the context of a given learning experience. This lends support to the idea that MRE is neither a universal nor a mechanical updating rule, but rather a rule whose application and justification may be context-sensitive.
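The first case above, where learning fixes new probabilities over a partition, can be sketched in code: under such a constraint, the MRE posterior rescales beliefs proportionally within each cell, which coincides with Jeffrey conditioning. This is a minimal illustrative Python sketch; the function name and toy numbers are my own, not from the paper.

```python
def mre_update_on_partition(prior, partition, new_marginals):
    """MRE posterior when experience fixes new probabilities over a
    partition of worlds: within each cell, prior beliefs are rescaled
    proportionally (equivalent to Jeffrey conditioning)."""
    posterior = {}
    for cell, q_i in zip(partition, new_marginals):
        p_cell = sum(prior[w] for w in cell)  # prior probability of the cell
        for w in cell:
            posterior[w] = prior[w] * q_i / p_cell
    return posterior

# Toy example: four worlds, a two-cell partition, and a shift of
# probability mass from the second cell to the first.
prior = {'a': 0.2, 'b': 0.3, 'c': 0.1, 'd': 0.4}
posterior = mre_update_on_partition(
    prior, [{'a', 'b'}, {'c', 'd'}], [0.8, 0.2]
)
```

The posterior is a genuine probability distribution whose cell probabilities match the learned constraint, while within-cell ratios of belief are preserved from the prior.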
Dziurosz-Serafinowicz, P. (2015). Maximum relative entropy updating and the value of learning. Entropy, 17(3), 1146–1164. https://doi.org/10.3390/e17031146