Robust Algorithms via PAC-Bayes and Laplace Distributions


Abstract

Laplace random variables are commonly used to model extreme noise in many fields, while systems trained to deal with such noise are often characterized by robustness properties. We introduce new learning algorithms that minimize objectives derived directly from PAC-Bayes generalization bounds, incorporating Laplace distributions. The resulting algorithms are regulated by the Huber loss function, which is considered relatively robust to large noise. We analyze the convexity properties of the objective and propose a few bounds that are fully convex, two of which are jointly convex in the mean and standard deviation under certain conditions. We derive new algorithms analogous to recent boosting algorithms, providing novel relations between boosting and PAC-Bayes analysis. Experiments show that our algorithms outperform AdaBoost (Freund and Schapire, A decision-theoretic generalization of on-line learning and an application to boosting, 1995), L1-LogBoost (Duchi and Singer, Boosting with structural sparsity, 2009), and RobustBoost (Freund, A more robust boosting algorithm, 2009) over a wide range of noise levels.
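The abstract notes that the derived algorithms are regulated by the Huber loss, which behaves quadratically for small residuals and linearly for large ones, so outliers are penalized far less than under the squared loss. A minimal sketch of the standard Huber loss (the threshold `delta` and its default value are illustrative assumptions, not parameters taken from the paper):

```python
def huber(a, delta=1.0):
    """Standard Huber loss: quadratic near zero, linear in the tails.

    The linear tails mean that large residuals (e.g. Laplace-style
    extreme noise) contribute far less than under the squared loss,
    which is the robustness property referred to in the abstract.
    delta is an illustrative threshold, not a value from the paper.
    """
    if abs(a) <= delta:
        return 0.5 * a * a          # quadratic regime
    return delta * (abs(a) - 0.5 * delta)  # linear regime

# Small residual: matches the squared loss exactly.
print(huber(0.5))  # 0.5 * 0.5**2 = 0.125
# Large residual: grows linearly (squared loss would give 8.0).
print(huber(4.0))  # 1.0 * (4.0 - 0.5) = 3.5
```

For comparison, a residual of 4.0 costs 8.0 under the squared loss but only 3.5 under this Huber loss, which is why the loss is considered relatively robust to large noise.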

Citation (APA)

Noy, A., & Crammer, K. (2015). Robust algorithms via PAC-Bayes and Laplace distributions. In Measures of Complexity: Festschrift for Alexey Chervonenkis (pp. 371–394). Springer International Publishing. https://doi.org/10.1007/978-3-319-21852-6_25
