Sharper lower bounds on the performance of the empirical risk minimization algorithm

Abstract

We present an argument based on the multidimensional and the uniform central limit theorems, proving that, under some geometric assumptions relating the target function T and the learning class F, the excess risk of the empirical risk minimization algorithm is lower bounded by E sup_{q∈Q} G_q · δ/√n, where (G_q)_{q∈Q} is a canonical Gaussian process associated with Q (a well-chosen subset of F) and δ is a parameter governing the oscillations of the empirical excess risk function over a small ball in F. © 2010 ISI/BS.
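For intuition about the complexity term in the bound, E sup_{q∈Q} G_q is the expected supremum of the canonical Gaussian process indexed by Q; for a finite Q ⊂ R^d this is simply E max_{q∈Q} <g, q> with g a standard Gaussian vector. The Python sketch below estimates that quantity by Monte Carlo for an illustrative finite class Q; the class, the sample size and the helper name are assumptions made for illustration, not objects taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative finite index class Q in R^d: 20 random directions on the unit
# sphere (purely for demonstration; the paper's Q is a well-chosen subset of
# the learning class F, selected through its geometric argument).
d = 50
Q = rng.standard_normal((20, d))
Q /= np.linalg.norm(Q, axis=1, keepdims=True)

def expected_sup_gaussian_process(Q, n_mc=10_000, rng=rng):
    """Monte Carlo estimate of E sup_{q in Q} G_q for the canonical
    Gaussian process G_q = <g, q>, with g ~ N(0, I_d)."""
    g = rng.standard_normal((n_mc, Q.shape[1]))  # n_mc independent Gaussian vectors
    sups = (g @ Q.T).max(axis=1)                 # realised supremum over Q for each draw
    return sups.mean()

# Rough scale of the lower bound delta/sqrt(n) * E sup_{q in Q} G_q,
# with delta and n chosen arbitrarily for illustration.
delta, n = 0.5, 1000
print(delta / np.sqrt(n) * expected_sup_gaussian_process(Q))
```

Plugging such an estimate into δ/√n · E sup_{q∈Q} G_q gives a rough sense of how the lower bound scales with the sample size n and with the richness of Q.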

Citation (APA)
Lecué, G., & Mendelson, S. (2010). Sharper lower bounds on the performance of the empirical risk minimization algorithm. Bernoulli, 16(3), 605–613. https://doi.org/10.3150/09-BEJ225
