Optimization of the SVM kernels using an empirical error minimization scheme

Abstract

We address the problem of optimizing kernel parameters in Support Vector Machine modelling, especially when the number of parameters is greater than one, as in polynomial kernels and in KMOD, our newly introduced kernel. The present work is an extended experimental study of the framework proposed by Chapelle et al. for optimizing SVM kernels using an analytic upper bound on the error; our optimization scheme, however, minimizes an empirical error estimate using a Quasi-Newton technique. The method is shown to reduce the number of support vectors over the course of the optimization. To assess our contribution, the approach is further used to adapt the KMOD, RBF and polynomial kernels on synthetic data and on the NIST digit image database. The method yields satisfactory results with much faster convergence than simple gradient descent. Furthermore, we also experimented with two more optimization schemes, based respectively on maximizing the margin and on minimizing an approximate VC-dimension estimate. While both of these objective functions are successfully optimized, the error is not reduced; the experimental results we carried out demonstrate this shortcoming.
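
As a rough illustration of the scheme the abstract describes, the sketch below tunes the two free parameters of a KMOD-style kernel by minimizing a sigmoid-smoothed validation error with a quasi-Newton routine. It assumes the KMOD form K(x, y) = a[exp(γ / (‖x − y‖² + σ²)) − 1] from the authors' earlier work, and it substitutes SciPy's BFGS with a finite-difference gradient and a held-out error estimate for the paper's analytic gradients and probabilistic error estimate; the synthetic data, parameter names, and C value are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist
from scipy.special import expit
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC


def kmod_gram(X, Y, gamma, sigma2, a=1.0):
    """Gram matrix for a KMOD-style kernel:
    K(x, y) = a * (exp(gamma / (||x - y||^2 + sigma^2)) - 1)."""
    d2 = cdist(X, Y, metric="sqeuclidean")
    return a * (np.exp(gamma / (d2 + sigma2)) - 1.0)


def smoothed_error(log_params, X_tr, y_tr, X_va, y_va):
    """Sigmoid-smoothed validation error for one kernel-parameter setting.

    Parameters are searched in log space so the quasi-Newton steps keep
    gamma and sigma^2 positive.
    """
    gamma, sigma2 = np.exp(log_params)
    svm = SVC(kernel="precomputed", C=10.0)
    svm.fit(kmod_gram(X_tr, X_tr, gamma, sigma2), y_tr)
    f = svm.decision_function(kmod_gram(X_va, X_tr, gamma, sigma2))
    s = np.where(y_va == svm.classes_[1], 1.0, -1.0)  # labels mapped to +/-1
    return np.mean(expit(-s * f))  # sigmoid(-margin): a soft error count


X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=0)

# Quasi-Newton (BFGS) search over (log gamma, log sigma^2); the coarse
# finite-difference gradient stands in for the analytic gradient of the
# smoothed error estimate derived in the paper.
res = minimize(smoothed_error, x0=np.log([1.0, 1.0]),
               args=(X_tr, y_tr, X_va, y_va),
               method="BFGS", options={"eps": 1e-2})
gamma_opt, sigma2_opt = np.exp(res.x)
print(f"gamma = {gamma_opt:.3f}, sigma^2 = {sigma2_opt:.3f}, "
      f"smoothed error = {res.fun:.3f}")
```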

Citation (APA)

Ayat, N. E., Cheriet, M., & Suen, C. Y. (2002). Optimization of the SVM kernels using an empirical error minimization scheme. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2388, pp. 354–369). Springer Verlag. https://doi.org/10.1007/3-540-45665-1_28
