Almost optimal estimates for approximation and learning by radial basis function networks


Abstract

This paper quantifies the approximation capability of radial basis function networks (RBFNs) and their application to machine learning theory. The aim is to deduce almost optimal rates of approximation and learning by RBFNs. For approximation, we show that for large classes of functions, the convergence rate of approximation by RBFNs is not slower than that of multivariate algebraic polynomials. For learning, we prove that, using classical empirical risk minimization, the RBFN estimator can theoretically realize the almost optimal learning rate. The obtained results underlie the successful application of RBFNs to various machine learning problems. © 2013 The Author(s).
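To make the setting concrete, the sketch below fits an RBFN to a smooth target by empirical risk minimization with squared loss. This is an illustrative toy example, not the paper's construction: the Gaussian kernel, the target function, the centers, and the width are all assumptions chosen for demonstration; with fixed centers, ERM over the span of the basis functions reduces to a linear least-squares problem.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian radial basis design matrix: phi_j(x) = exp(-(x - c_j)^2 / width^2)."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / width ** 2)

# Hypothetical target: noiseless samples of a smooth univariate function.
rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, 200)
y_train = np.sin(np.pi * x_train)

# Fixed centers and width; the outer weights are chosen by least squares,
# i.e. empirical risk minimization with squared loss over the RBFN span.
centers = np.linspace(-1.0, 1.0, 15)
width = 0.3
Phi = rbf_design(x_train, centers, width)
weights, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

# Evaluate the fitted network on a test grid and measure the sup-norm error.
x_test = np.linspace(-1.0, 1.0, 500)
y_hat = rbf_design(x_test, centers, width) @ weights
max_err = np.max(np.abs(y_hat - np.sin(np.pi * x_test)))
print(f"sup-norm error on [-1, 1]: {max_err:.2e}")
```

Increasing the number of centers (the network width) shrinks the approximation error, which is the quantity whose decay rate the paper compares against polynomial approximation.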

Citation (APA)

Lin, S., Liu, X., Rong, Y., & Xu, Z. (2014). Almost optimal estimates for approximation and learning by radial basis function networks. Machine Learning, 95(2), 147–164. https://doi.org/10.1007/s10994-013-5406-z
