This paper quantifies the approximation capability of radial basis function networks (RBFNs) and its implications for machine learning theory. Our goal is to establish almost optimal rates of approximation and learning by RBFNs. For approximation, we show that, for large classes of functions, the rate of approximation by RBFNs is no slower than that of multivariate algebraic polynomials. For learning, we prove that the RBFN estimator obtained by classical empirical risk minimization theoretically achieves the almost optimal learning rate. These results underlie the successful application of RBFNs to a variety of machine learning problems. © 2013 The Author(s).
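To make the setting concrete, the following is a minimal sketch of empirical risk minimization over an RBFN hypothesis class with squared loss, which reduces to a linear least-squares problem in the outer weights. The Gaussian activation, the fixed grid of centers, the width, and the target function are illustrative assumptions, not the construction analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training sample from an assumed regression model y = f(x) + noise;
# the target f below is purely illustrative.
n = 200
x = rng.uniform(-1.0, 1.0, size=n)
f = lambda t: np.sin(np.pi * t)
y = f(x) + 0.1 * rng.standard_normal(n)

# RBFN hypothesis class: Gaussian radial basis functions with fixed
# centers and width; only the outer weights are trained.
centers = np.linspace(-1.0, 1.0, 15)
width = 0.3

def design(t):
    # Feature matrix Phi[i, j] = exp(-((t_i - c_j) / width)^2)
    return np.exp(-((t[:, None] - centers[None, :]) / width) ** 2)

# ERM with squared loss over the outer weights is ordinary least squares.
Phi = design(x)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Evaluate the estimator on a test grid.
t = np.linspace(-1.0, 1.0, 100)
err = np.max(np.abs(design(t) @ w - f(t)))
print(f"max deviation from target on grid: {err:.3f}")
```

With the centers fixed, the empirical risk is convex in the weights, so the minimizer is computed exactly; the paper's analysis concerns how fast such estimators can converge as the sample size and network size grow.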
CITATION STYLE
Lin, S., Liu, X., Rong, Y., & Xu, Z. (2014). Almost optimal estimates for approximation and learning by radial basis function networks. Machine Learning, 95(2), 147–164. https://doi.org/10.1007/s10994-013-5406-z