Although kernel machines enable non-linear analysis through the transformation of their input data, their computational complexity makes them inefficient, in both time and memory, for the analysis of very large databases. Several attempts have been made to improve the performance of kernel methods, many of which focus on approximating the kernel matrix or the feature mapping associated with it. Current trends in machine learning demand the capacity to deal with large data sets while exploiting the capabilities of massively parallel GPU-based architectures. This has mainly been accomplished by combining gradient descent optimization with online learning. This paper presents an online kernel-based model built on the dual formulation of the Least Squares Support Vector Machine (LS-SVM) method, using a Learning on a Budget strategy to lighten the computational cost. This extends the algorithm's capacity to analyze very large or high-dimensional data without requiring large memory resources. The method was evaluated against two other kernel approximation techniques: the Nyström approximation and Random Fourier Features. Experiments on several datasets show the effectiveness of the Learning on a Budget strategy compared with these approximation techniques.
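As background for the dual formulation mentioned above: the classical batch LS-SVM classifier replaces the quadratic program of a standard SVM with a single linear system. The sketch below is a minimal NumPy illustration of that batch dual solve, not the online budgeted algorithm the paper proposes; the RBF bandwidth `sigma` and regularization parameter `gamma` are assumed hyperparameters chosen for illustration.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Pairwise squared Euclidean distances, then the Gaussian (RBF) kernel.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM dual: [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],
    where Omega_ij = y_i y_j K(x_i, x_j)."""
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # alpha, b

def lssvm_predict(X_train, y_train, alpha, b, X_new, sigma=1.0):
    # Decision function: sign(sum_i alpha_i y_i K(x, x_i) + b).
    K = rbf_kernel(X_new, X_train, sigma)
    return np.sign(K @ (alpha * y_train) + b)
```

Note that the full n-by-n system is exactly what makes the batch method impractical at scale: a Learning on a Budget strategy bounds the number of retained support vectors, so memory stays fixed as data streams in.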
Citation:
Toledo-Cortés, S., Castellanos-Martinez, I. Y., & Gonzalez, F. A. (2019). Large scale learning techniques for least squares support vector machines. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11401 LNCS, pp. 3–11). Springer Verlag. https://doi.org/10.1007/978-3-030-13469-3_1