Hyperparameter selection in kernel principal component analysis


Abstract

In kernel methods, choosing a suitable kernel is indispensable for obtaining good results, yet no well-founded selection methods have been established in general for unsupervised learning. We focus on kernel principal component analysis (kernel PCA), a nonlinear extension of principal component analysis that has been used effectively for extracting nonlinear features and reducing dimensionality. As a kernel method, kernel PCA also suffers from the problem of kernel choice. Although cross-validation is a popular method for choosing hyperparameters, it cannot be applied straightforwardly to kernel selection in kernel PCA because different kernels induce incomparable norms. It is therefore important to develop a well-founded method for choosing a kernel in kernel PCA. This study proposes a method for choosing the hyperparameters of kernel PCA (the kernel and the number of components) based on cross-validation of the reconstruction errors of pre-images in the original space, where errors under different kernels are directly comparable. Experimental results on synthesized and real-world datasets demonstrate that the proposed method selects an appropriate kernel and number of components for kernel PCA, as measured by visualization quality and by classification errors on the principal components. The results imply that the proposed method enables automatic design of hyperparameters in kernel PCA. © 2014 Science Publications.
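
To make the selection procedure concrete, here is a minimal sketch of the idea described in the abstract. It is not the authors' implementation: it uses scikit-learn's KernelPCA, whose fit_inverse_transform=True option learns an approximate pre-image map, and the grid of RBF bandwidths (gamma) and component counts below is purely illustrative. Because the reconstruction error is measured in the original input space, scores obtained with different kernels and component counts can be compared directly.

```python
# Sketch: cross-validating kernel PCA hyperparameters (kernel bandwidth
# and number of components) by pre-image reconstruction error in the
# original input space. Assumptions: scikit-learn's approximate
# pre-image (fit_inverse_transform=True) stands in for the paper's
# pre-image method; the hyperparameter grid is illustrative only.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import KFold

X, _ = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)

def cv_reconstruction_error(X, gamma, n_components, n_splits=5):
    """Mean squared pre-image reconstruction error over CV folds.

    Errors live in the original space, so they are comparable across
    different kernel parameters and component counts.
    """
    errors = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True,
                                     random_state=0).split(X):
        kpca = KernelPCA(n_components=n_components, kernel="rbf",
                         gamma=gamma, fit_inverse_transform=True)
        kpca.fit(X[train_idx])
        # Project held-out points, then map them back to input space.
        X_rec = kpca.inverse_transform(kpca.transform(X[test_idx]))
        errors.append(np.mean((X[test_idx] - X_rec) ** 2))
    return np.mean(errors)

# Pick the (gamma, n_components) pair with the smallest CV error.
grid = [(g, d) for g in (0.1, 1.0, 10.0) for d in (1, 2, 4)]
best = min(grid, key=lambda p: cv_reconstruction_error(X, *p))
print("selected gamma=%.1f, n_components=%d" % best)
```

The same loop extends to choosing among kernel families (not just the RBF bandwidth), since every candidate is scored by the same original-space error.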

Citation (APA)

Alam, M. A., & Fukumizu, K. (2014). Hyperparameter selection in kernel principal component analysis. Journal of Computer Science, 10(7), 1139–1150. https://doi.org/10.3844/jcssp.2014.1139.1150
