Background: This paper introduces the notion of optimizing different norms in the dual problem of support vector machines with multiple kernels. The selection of norms yields different extensions of multiple kernel learning (MKL), such as L∞, L1, and L2 MKL. In particular, L2 MKL is a novel method that leads to non-sparse optimal kernel coefficients, in contrast to the sparse kernel coefficients optimized by the existing L∞ MKL method. In real biomedical applications, L2 MKL may have advantages over sparse integration methods for thoroughly combining complementary information from heterogeneous data sources.

Results: We provide a theoretical analysis of the relationship between the L2 optimization of kernels in the dual problem and the L2 coefficient regularization in the primal problem. Understanding the dual L2 problem grants a unified view on MKL and enables us to extend the L2 method to a wide range of machine learning problems. We implement L2 MKL for ranking and classification problems and compare its performance with the sparse L∞ and the averaging L1 MKL methods. The experiments are carried out on six real biomedical data sets and two large-scale UCI data sets. L2 MKL yields better performance on most of the benchmark data sets. In particular, we propose a novel L2 MKL least squares support vector machine (LSSVM) algorithm, which is shown to be an efficient and promising classifier for large-scale data set processing.

Conclusions: This paper extends the statistical framework of genomic data fusion based on MKL. Allowing non-sparse weights on the data sources is an attractive option in settings where we believe most data sources to be relevant to the problem at hand and want to avoid the winner-takes-all effect seen in L∞ MKL, which can be detrimental to performance in prospective studies. The notion of optimizing L2 kernels can be straightforwardly extended to ranking, classification, regression, and clustering algorithms. To tackle the computational burden of MKL, this paper proposes several novel LSSVM-based MKL algorithms. Systematic comparison on real data sets shows that LSSVM MKL has performance comparable to the conventional SVM MKL algorithms. Moreover, large-scale numerical experiments indicate that, when cast as semi-infinite programming, LSSVM MKL can be solved more efficiently than SVM MKL.

Availability: The MATLAB code of the algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/l2lssvm.html.

© 2010 Yu et al; licensee BioMed Central Ltd.
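To make the multiple-kernel LSSVM idea concrete, the sketch below (Python/NumPy, not the authors' MATLAB implementation linked above) shows one simple alternating scheme: for fixed kernel weights, solve the standard LS-SVM dual linear system; then re-weight each base kernel proportionally to alpha' K_j alpha and renormalize the weight vector to unit L2 norm, which keeps all weights non-zero (non-sparse). The function names and this heuristic weight update are illustrative assumptions only; they are not the semi-infinite programming algorithm described in the paper.

import numpy as np

def lssvm_solve(K, y, gamma=1.0):
    # Solve the standard LS-SVM dual linear system for a fixed combined kernel K:
    # [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, dual variables alpha

def l2_mkl_lssvm(kernels, y, gamma=1.0, iters=20):
    # Illustrative alternating scheme (assumed, not the paper's SIP algorithm):
    # 1) solve the LS-SVM system for the current weighted kernel combination,
    # 2) set theta_j proportional to alpha' K_j alpha and rescale to ||theta||_2 = 1,
    #    so the resulting kernel weights stay non-sparse.
    p = len(kernels)
    theta = np.ones(p) / np.sqrt(p)  # uniform start with unit L2 norm
    for _ in range(iters):
        K = sum(t * Kj for t, Kj in zip(theta, kernels))
        b, alpha = lssvm_solve(K, y, gamma)
        scores = np.array([alpha @ Kj @ alpha for Kj in kernels])
        theta = scores / np.linalg.norm(scores)
    return theta, alpha, b

Under this sketch, the averaging L1 baseline corresponds to simply fixing theta at uniform weights, whereas a sparse L∞-style solution would concentrate most of the weight on a few kernels; the L2 normalization is what keeps every data source contributing.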
Yu, S., Falck, T., Daemen, A., Tranchevent, L. C., Suykens, J. A. K., De Moor, B., & Moreau, Y. (2010). L2-norm multiple kernel learning and its application to biomedical data fusion. BMC Bioinformatics, 11. https://doi.org/10.1186/1471-2105-11-309