Standard Bayesian inference can behave suboptimally when the model is wrong. We present a modification of Bayesian inference that continues to achieve good convergence rates even under model misspecification. Our method adapts the Bayesian learning rate to the data, picking the rate that minimizes the cumulative loss of sequential prediction by posterior randomization. Our results can also be used to adapt the learning rate in a PAC-Bayesian context. The results are based on an inequality due to T. Zhang and others, extended here to dependent random variables.
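As a rough illustration of the rate-selection idea sketched in the abstract, here is a minimal Python sketch, assuming a finite parameter grid, a Gaussian location model, and log-loss (all illustrative choices, not the paper's setup). For each candidate learning rate η it runs the η-generalized (tempered) Bayesian update sequentially, accumulating the posterior-expected loss of each new point, i.e. the expected loss under posterior randomization, and then selects the η with the smallest cumulative loss.

```python
import numpy as np

def neg_log_lik(theta, x, sigma=1.0):
    # Log-loss of a Gaussian location model N(theta, sigma^2) on point x
    # (an illustrative model choice, not the paper's).
    return 0.5 * ((x - theta) / sigma) ** 2 + 0.5 * np.log(2 * np.pi * sigma**2)

def cumulative_randomized_loss(eta, data, models, prior):
    """Cumulative sequential loss when each point x_t is predicted from the
    eta-generalized posterior on x_1..x_{t-1}; the posterior-expected loss
    equals the expected loss under posterior randomization."""
    log_post = np.log(prior)          # unnormalized log posterior weights
    total = 0.0
    for x in data:
        post = np.exp(log_post - log_post.max())
        post /= post.sum()            # normalize the current posterior
        losses = np.array([neg_log_lik(th, x) for th in models])
        total += post @ losses        # posterior-expected loss on x
        log_post -= eta * losses      # tempered Bayesian update at rate eta
    return total

def safe_bayes_eta(data, models, prior, etas):
    # Pick the learning rate minimizing the cumulative sequential loss.
    scores = [cumulative_randomized_loss(eta, data, models, prior) for eta in etas]
    return etas[int(np.argmin(scores))]

# Toy usage: a grid of location parameters, a uniform prior, and a dyadic
# grid of candidate rates 1, 1/2, ..., 1/128 (all hypothetical settings).
rng = np.random.default_rng(0)
data = rng.normal(0.3, 1.0, size=200)
models = np.linspace(-2, 2, 41)
prior = np.full(len(models), 1 / len(models))
etas = [2.0 ** -k for k in range(8)]
print("selected eta:", safe_bayes_eta(data, models, prior, etas))
```

With η = 1 the update is the standard Bayesian posterior; smaller η temper the likelihood, which is what protects the procedure when the model is misspecified.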
CITATION
Grünwald, P. (2012). The safe Bayesian: Learning the learning rate via the mixability gap. In Lecture Notes in Computer Science (Vol. 7568 LNAI, pp. 169–183). Springer. https://doi.org/10.1007/978-3-642-34106-9_16