Efficient optimization of the parameters of LS-SVM for regression versus cross-validation error

Abstract

Least Squares Support Vector Machines (LS-SVM) are among the state of the art in kernel methods for regression and function approximation. In recent years, these models have been successfully applied to time series modelling and prediction. A key issue for the good performance of an LS-SVM model is the choice of values for both the kernel parameters and the hyperparameters, so as to avoid overfitting the underlying system to be modelled. In this paper, an efficient method for evaluating the cross-validation error of an LS-SVM is reviewed. Expressions for its partial derivatives are presented in order to improve the procedure for parameter optimization. Initial guesses for the values of both the kernel parameters and the regularization factor are also proposed. Finally, we conduct experiments on a time series example using a number of parameter-optimization methods for LS-SVM models. The results show that the proposed partial derivatives and heuristics can improve performance with respect to both execution time and the quality of the optimized model. © 2009 Springer Berlin Heidelberg.
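To make the abstract's setting concrete, the sketch below is a minimal illustration (not the paper's exact algorithm) of an LS-SVM regressor with a Gaussian RBF kernel, together with an efficient evaluation of the leave-one-out cross-validation error: the LOO residuals are obtained from a single inverse of the training system matrix via the known identity r_i = α_i / (A⁻¹)_{ii}, rather than by retraining n times. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Z, sigma):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, sigma, gamma):
    """Solve the LS-SVM regression linear system
        [ K + I/gamma   1 ] [alpha]   [y]
        [ 1^T           0 ] [  b  ] = [0]
    and return (alpha, b, A), where A is the full system matrix."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    A[:n, n] = 1.0   # bias column
    A[n, :n] = 1.0   # bias constraint row
    sol = np.linalg.solve(A, np.append(y, 0.0))
    return sol[:n], sol[n], A

def fast_loo_residuals(alpha, A):
    """Exact leave-one-out residuals from one matrix inverse:
    r_i = alpha_i / (A^{-1})_{ii}, i.e. no model is retrained."""
    n = len(alpha)
    return alpha / np.diag(np.linalg.inv(A))[:n]
```

A gradient-based optimizer can then minimize the sum of squared LOO residuals over (sigma, gamma); the paper's contribution is to supply analytic partial derivatives of this error for that optimization, which the sketch above omits.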

Citation (APA)

Rubio, G., Pomares, H., Rojas, I., Herrera, L. J., & Guillén, A. (2009). Efficient optimization of the parameters of LS-SVM for regression versus cross-validation error. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5769 LNCS, pp. 406–415). https://doi.org/10.1007/978-3-642-04277-5_41
