Bounds on the generalization ability of Bayesian inference and Gibbs algorithms

Abstract

Recent theoretical work applying the methods of statistical learning theory has highlighted the relevance of long-established learning paradigms such as Bayesian inference and Gibbs algorithms. Sample complexity bounds have been given for these paradigms in the zero-error case. This paper studies the behavior of these algorithms without that assumption. Results include uniform convergence of the Gibbs algorithm towards Bayesian inference, the rate of convergence of the empirical loss towards the generalization loss, and convergence of the generalization error towards the optimal loss in the underlying class of functions.
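To make the two paradigms concrete, here is a minimal sketch (not taken from the paper) contrasting them on a toy finite class of threshold classifiers. The prior, inverse temperature `beta`, and the data are illustrative assumptions: Bayesian inference averages predictions over the posterior, while the Gibbs algorithm draws a single hypothesis from the same posterior and predicts with it.

```python
import math
import random

# Toy hypothesis class: threshold classifiers h_t(x) = 1 if x >= t else 0.
thresholds = [0.2, 0.4, 0.6, 0.8]

def h(t, x):
    return 1 if x >= t else 0

# Illustrative training sample of (x, y) pairs.
data = [(0.1, 0), (0.3, 0), (0.5, 1), (0.7, 1), (0.9, 1)]
n = len(data)

def empirical_loss(t):
    return sum(h(t, x) != y for x, y in data) / n

# Gibbs posterior over hypotheses: uniform prior reweighted by
# exp(-beta * n * empirical_loss); beta is an assumed temperature parameter.
beta = 5.0
weights = [math.exp(-beta * n * empirical_loss(t)) for t in thresholds]
Z = sum(weights)
posterior = [w / Z for w in weights]

def bayes_predict(x):
    # Bayesian inference: posterior-weighted average of predictions,
    # thresholded at 1/2 (majority vote under the posterior).
    avg = sum(p * h(t, x) for p, t in zip(posterior, thresholds))
    return 1 if avg >= 0.5 else 0

def gibbs_predict(x, rng):
    # Gibbs algorithm: sample ONE hypothesis from the posterior, use it alone.
    t = rng.choices(thresholds, weights=posterior, k=1)[0]
    return h(t, x)

rng = random.Random(0)
print(bayes_predict(0.9), gibbs_predict(0.9, rng))
```

As the posterior concentrates on low-empirical-loss hypotheses, a randomly drawn Gibbs hypothesis behaves increasingly like the full Bayesian average — the kind of convergence the paper quantifies.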

Citation (APA)

Teytaud, O., & Paugam-Moisy, H. (2001). Bounds on the generalization ability of Bayesian inference and Gibbs algorithms. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2130, pp. 265–271). Springer Verlag. https://doi.org/10.1007/3-540-44668-0_38
