Kernel matching pursuit (KMP) is a greedy machine learning algorithm that iteratively appends functions from a kernel-based dictionary to its solution. An obvious limitation is that every kernel function in the dictionary remains fixed throughout the appending process, yet without sufficient prior knowledge it is difficult to determine an optimal dictionary of kernel functions before training. This paper proposes to further refine the solutions obtained by KMP by adjusting all of their parameters simultaneously. Three optimization methods, gradient descent (GD), simulated annealing (SA), and particle swarm optimization (PSO), are used to perform the refinement. Their performance is analyzed and evaluated on experiments with UCI benchmark datasets. © 2010 Springer-Verlag.
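To make the two-stage idea concrete, here is a minimal sketch, not the authors' implementation: greedy KMP over a Gaussian-kernel dictionary (one candidate kernel per training point, a common choice), followed by gradient-descent refinement of all centers, widths, and coefficients at once. All function names, the fixed-width Gaussian dictionary, and the backtracking step-size rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gauss(x, c, w):
    """Gaussian kernel function centred at c with width w (assumed dictionary form)."""
    return np.exp(-((x - c) ** 2) / (2.0 * w ** 2))

def predict(X, terms):
    """Evaluate the additive model sum_i a_i * gauss(X, c_i, w_i)."""
    return sum(a * gauss(X, c, w) for c, w, a in terms)

def sse(X, y, terms):
    """Squared-error loss of the current solution."""
    return float(np.sum((predict(X, terms) - y) ** 2))

def kmp_fit(X, y, centers, width, n_terms):
    """Greedy kernel matching pursuit: at each step append the dictionary
    function (fixed kernel) that best reduces the current residual."""
    residual = y.astype(float).copy()
    terms = []
    for _ in range(n_terms):
        best = None
        for c in centers:
            g = gauss(X, c, width)
            a = (g @ residual) / (g @ g)          # optimal coefficient for this kernel
            err = np.sum((residual - a * g) ** 2)
            if best is None or err < best[0]:
                best = (err, c, a)
        _, c, a = best
        terms.append([c, width, a])
        residual = residual - a * gauss(X, c, width)
    return terms

def refine_gd(X, y, terms, lr=1e-3, steps=100):
    """Refinement stage: adjust ALL parameters (centres, widths, coefficients)
    simultaneously by gradient descent on the squared error, with a simple
    backtracking rule so the loss never increases (an illustrative safeguard)."""
    terms = [list(t) for t in terms]
    err = sse(X, y, terms)
    for _ in range(steps):
        r = predict(X, terms) - y                 # residual drives every gradient
        grads = []
        for c, w, a in terms:
            g = gauss(X, c, w)
            grads.append((
                np.sum(r * a * g * (X - c) / w ** 2),       # d loss / d c
                np.sum(r * a * g * (X - c) ** 2 / w ** 3),  # d loss / d w
                np.sum(r * g),                              # d loss / d a
            ))
        trial = [[c - lr * dc, w - lr * dw, a - lr * da]
                 for (c, w, a), (dc, dw, da) in zip(terms, grads)]
        new_err = sse(X, y, trial)
        if new_err < err:                         # accept step, grow step size
            terms, err = trial, new_err
            lr *= 1.1
        else:                                     # reject step, shrink step size
            lr *= 0.5
    return terms
```

The same `terms` parameterization could instead be refined by SA or PSO, the other two strategies the abstract names; gradient descent is shown only because its update rule is the shortest to write down.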
Li, J., & Lu, Y. (2010). Refining kernel matching pursuit. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6064 LNCS, pp. 25–32). https://doi.org/10.1007/978-3-642-13318-3_4