Bayesian inference featuring entropic priors

Abstract

The subject of this work is the parametric inference problem, i.e. how to infer from data the parameters of the data likelihood of a random process whose parametric form is known a priori. The assumption that Bayes' theorem has to be used to incorporate new data samples reduces the problem to the question of how to specify a prior before any data have been seen. For this subproblem three theorems are stated. The first is that Jaynes' Maximum Entropy Principle requires at least a constraint on the expected entropy of the data likelihood, which yields entropic priors without the need for further axioms. Second, I show that maximizing Shannon entropy under an expected data-likelihood entropy constraint is equivalent to maximizing relative entropy and is therefore reparametrization invariant for continuous-valued data likelihoods. Third, I propose that, in a state of absolute ignorance about the data-likelihood entropy, one should choose the hyperparameter α of an entropic prior such that the change of the expected data-likelihood entropy is maximized. Among other beautiful properties, this principle is equivalent to maximizing the mean-squared entropy error and is invariant under any reparametrization of the data likelihood. Altogether we obtain a Bayesian inference procedure that incorporates special prior knowledge when it is available, has a sound solution when it is not, and leaves no hyperparameters unspecified. © 2007 American Institute of Physics.
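For orientation, the following is a minimal sketch of the standard entropic-prior family discussed in this literature; the notation (S, m, α) is assumed here and is not necessarily that of the paper. Writing the entropy of the data likelihood as

    S(\theta) = -\int p(x \mid \theta)\, \log p(x \mid \theta)\, dx,

maximizing the (relative) entropy of the prior subject to a constraint on the expected value of S(\theta) gives the one-parameter family

    p(\theta \mid \alpha) \propto m(\theta)\, e^{\alpha S(\theta)},

where m(\theta) is an invariant reference measure and the Lagrange multiplier α is the hyperparameter whose selection, in the absence of any knowledge about the data-likelihood entropy, is the subject of the paper's third result.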

Cite

APA

Neumann, T. (2007). Bayesian inference featuring entropic priors. In AIP Conference Proceedings (Vol. 954, pp. 283–292). https://doi.org/10.1063/1.2821274
