Learning theory estimates via integral operators and their approximations

Abstract

The regression problem in learning theory is investigated with least-squares Tikhonov regularization schemes in reproducing kernel Hilbert spaces (RKHS). We follow our previous work and apply the sampling operator to the error analysis in both the RKHS norm and the L2 norm. The tool for estimating the sample error is a Bennett inequality for random variables with values in Hilbert spaces. By taking the Hilbert space to be the one consisting of Hilbert-Schmidt operators on the RKHS, we improve the error bounds in the L2 metric, motivated by an idea of Caponnetto and De Vito. The error bounds we derive in the RKHS norm, together with a Tsybakov function we discuss here, yield interesting applications to the error analysis of the (binary) classification problem, since the RKHS metric controls the uniform (sup-norm) metric.
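
The least-squares Tikhonov regularization scheme analyzed in the paper solves, over the RKHS H_K of a Mercer kernel K,

    f_{z,lambda} = argmin over f in H_K of (1/m) * sum_{i=1}^m (f(x_i) - y_i)^2 + lambda * ||f||_K^2.

By the representer theorem, the minimizer has the closed form f_{z,lambda}(x) = sum_i c_i K(x, x_i) with c = (K + lambda*m*I)^{-1} y, where K is the kernel matrix on the sample. Below is a minimal NumPy sketch of this estimator; the Gaussian kernel and all parameter values are illustrative assumptions, not choices made in the paper.

    import numpy as np

    def gaussian_kernel(X1, X2, sigma=1.0):
        # Gaussian (RBF) kernel K(x, x') = exp(-|x - x'|^2 / (2 sigma^2));
        # an illustrative choice of Mercer kernel, not specified by the paper.
        d2 = (np.sum(X1**2, axis=1)[:, None]
              + np.sum(X2**2, axis=1)[None, :]
              - 2.0 * X1 @ X2.T)
        return np.exp(-d2 / (2.0 * sigma**2))

    def regularized_least_squares(X, y, lam, sigma=1.0):
        # Tikhonov-regularized least squares in the RKHS of the kernel:
        # minimize (1/m) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2.
        # Representer theorem: f(x) = sum_i c_i K(x, x_i),
        # with c = (K + lam * m * I)^{-1} y.
        m = X.shape[0]
        K = gaussian_kernel(X, X, sigma)
        c = np.linalg.solve(K + lam * m * np.eye(m), y)
        return lambda X_new: gaussian_kernel(X_new, X, sigma) @ c

    # Usage: fit noisy samples of a smooth target (hypothetical data).
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(50, 1))
    y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(50)
    f = regularized_least_squares(X, y, lam=1e-3)
    print(f(np.array([[0.5]])))  # estimate of the regression function at x = 0.5

The paper's results quantify how fast such an estimator approaches the regression function as the sample size m grows, in both the RKHS and L2 norms.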

Citation (APA)

Smale, S., & Zhou, D.-X. (2007). Learning theory estimates via integral operators and their approximations. Constructive Approximation, 26(2), 153–172. https://doi.org/10.1007/s00365-006-0659-y
