In the SVM community, the learned classifier is typically a combination of kernel functions centered at the training samples, and SVMs offer two main regularization models for finding the combination coefficients. The most popular model, given m input samples, norm-regularizes the classification function in a reproducing kernel Hilbert space (RKHS) and is converted into an optimization problem in ℝ^m by duality or the representer theorem. The other important model is the generalized support vector machine (GSVM), in which the coefficient vector of the hypothesis is norm-regularized directly in the Euclidean space ℝ^m. In this work, we analyze the differences between the two models in computational stability, computational complexity, and the efficiency of Newton-type algorithms, with particular attention to reduced SVMs for large-scale training problems. Many typical loss functions are considered. Our studies show that the GSVM model has more advantages than the RKHS model, and experiments are presented to support this analysis.
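To make the contrast concrete, the two regularization models can be sketched as follows (the symbols λ for the regularization parameter, L for a generic loss, and K for the kernel are assumptions for illustration, not notation taken from the abstract). The RKHS model penalizes the norm of the classifier itself, and the representer theorem reduces it to a finite-dimensional problem; the GSVM model penalizes the coefficient vector of the same kernel expansion directly in ℝ^m:

```latex
% RKHS regularization: penalize \|f\|_{\mathcal{H}_K}; by the representer
% theorem the minimizer has the form f(x) = \sum_j \alpha_j K(x, x_j).
\min_{f \in \mathcal{H}_K} \; \lambda \|f\|_{\mathcal{H}_K}^2
  + \frac{1}{m} \sum_{i=1}^{m} L\bigl(y_i, f(x_i)\bigr)

% GSVM: keep the same kernel expansion but regularize the coefficient
% vector u in the Euclidean space \mathbb{R}^m instead of the RKHS norm.
\min_{u \in \mathbb{R}^m} \; \lambda \|u\|_2^2
  + \frac{1}{m} \sum_{i=1}^{m}
    L\Bigl(y_i, \textstyle\sum_{j=1}^{m} u_j K(x_i, x_j)\Bigr)
```

The two objectives differ only in the penalty term, yet this choice affects the conditioning of the resulting finite-dimensional problem, which is what the paper's stability and complexity comparison examines.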
Zhou, S. (2013). Which is better? Regularization in RKHS vs ℝm on reduced SVMs. Statistics, Optimization and Information Computing, 1(1), 82–106. https://doi.org/10.19139/soic.v1i1.27