Simpler PAC-Bayesian bounds for hostile data

Abstract

PAC-Bayesian learning bounds are of the utmost interest to the learning community. Their role is to connect the generalization ability of an aggregation distribution ρ to its empirical risk and to its Kullback-Leibler divergence with respect to some prior distribution π. Unfortunately, most of the available bounds rely on heavy assumptions such as boundedness and independence of the observations. This paper aims to relax these constraints and provides PAC-Bayesian learning bounds that hold for dependent, heavy-tailed observations (hereafter referred to as hostile data). In these bounds the Kullback-Leibler divergence is replaced with a general version of Csiszár's f-divergence. We prove a general PAC-Bayesian bound, and show how to use it in various hostile settings.
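To fix ideas, the sketch below first recalls the classical PAC-Bayesian bound that the paper generalizes (a McAllester-type bound for i.i.d. observations and a loss bounded in [0,1]), then shows the general shape of an f-divergence bound of the kind described in the abstract. The second inequality is an illustrative Hölder-type form under assumed moment conditions, not a verbatim statement of the paper's theorem; the symbols R, r_n, M_q and φ_p are notation introduced here for the sketch (only ρ and π appear in the abstract).

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Classical McAllester-type PAC-Bayesian bound (i.i.d. data, loss in [0,1]):
% with probability at least 1 - \delta, simultaneously for all posteriors \rho,
% the out-of-sample risk R is controlled by the empirical risk r_n and KL(\rho || \pi).
\[
  \mathbb{E}_{\theta \sim \rho} R(\theta)
  \;\le\;
  \mathbb{E}_{\theta \sim \rho} r_n(\theta)
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}} .
\]

% Schematic f-divergence variant (illustrative shape only, not the paper's exact theorem):
% for conjugate exponents p, q > 1 with 1/p + 1/q = 1, a Hölder-type argument yields,
% with probability at least 1 - \delta,
\[
  \Bigl| \mathbb{E}_{\theta \sim \rho} R(\theta) - \mathbb{E}_{\theta \sim \rho} r_n(\theta) \Bigr|
  \;\le\;
  \left( \frac{\mathcal{M}_q}{\delta} \right)^{1/q}
  \bigl( D_{\phi_p}(\rho \,\|\, \pi) + 1 \bigr)^{1/p},
  \qquad
  D_{\phi}(\rho \,\|\, \pi)
  = \mathbb{E}_{\theta \sim \pi}\, \phi\!\left( \frac{\mathrm{d}\rho}{\mathrm{d}\pi}(\theta) \right),
\]
% where \mathcal{M}_q is a q-th moment of the deviation |r_n(\theta) - R(\theta)| under \pi
% (a moment condition standing in for boundedness and independence), and \phi_p(x) = x^p - 1
% gives a chi-square-type divergence at p = 2.

\end{document}
```

Because the right-hand side of the second display only requires a moment bound on the deviation between empirical and true risk, a bound of this shape remains meaningful for the dependent, heavy-tailed observations targeted by the paper.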

Citation (APA)

Alquier, P., & Guedj, B. (2018). Simpler PAC-Bayesian bounds for hostile data. Machine Learning, 107(5), 887–902. https://doi.org/10.1007/s10994-017-5690-0
