Recent theoretical work applying methods from statistical learning theory has renewed interest in long-established learning paradigms such as Bayesian inference and the Gibbs algorithm. Sample complexity bounds have been derived for these paradigms in the zero-error case. This paper studies the behavior of these algorithms without that assumption. Results include uniform convergence of the Gibbs algorithm towards Bayesian inference, the rate of convergence of the empirical loss towards the generalization loss, and convergence of the generalization error towards the optimal loss in the underlying class of functions.
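The distinction between the two paradigms can be illustrated with a small sketch (not taken from the paper): both maintain a posterior over hypotheses, but the Gibbs algorithm samples a single hypothesis from the posterior to classify, while Bayesian inference averages the predictions of all hypotheses under the posterior. The hypothesis class, data, and the exponential weighting by empirical loss below are illustrative choices, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hypothesis class: threshold classifiers h_t(x) = 1[x >= t]
thresholds = np.linspace(0.0, 1.0, 21)

# Toy training sample: x in [0, 1], true threshold 0.5, ~10% label noise,
# so the zero-error assumption does NOT hold (the setting the paper relaxes)
X = rng.random(40)
y = ((X >= 0.5).astype(int) ^ (rng.random(40) < 0.1)).astype(int)

def empirical_loss(t):
    """Fraction of training points misclassified by threshold t."""
    return np.mean((X >= t).astype(int) != y)

# Posterior over hypotheses: a Gibbs measure exp(-beta * empirical loss),
# an illustrative choice of posterior
beta = 20.0
losses = np.array([empirical_loss(t) for t in thresholds])
weights = np.exp(-beta * losses)
weights /= weights.sum()

def gibbs_predict(x):
    # Gibbs algorithm: draw ONE hypothesis from the posterior and use it
    t = rng.choice(thresholds, p=weights)
    return int(x >= t)

def bayes_predict(x):
    # Bayesian inference: average predictions over the whole posterior,
    # then take a majority vote
    vote = np.sum(weights * (x >= thresholds).astype(int))
    return int(vote >= 0.5)
```

As the sample size grows, the posterior concentrates on low-loss hypotheses, so a single draw behaves increasingly like the posterior average — the intuition behind the uniform convergence of the Gibbs algorithm towards Bayesian inference stated in the abstract.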
Teytaud, O., & Paugam-Moisy, H. (2001). Bounds on the generalization ability of Bayesian inference and Gibbs algorithms. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2130, pp. 265–271). Springer Verlag. https://doi.org/10.1007/3-540-44668-0_38