Robust Kernel Principal Component Analysis with ℓ2,1-Regularized Loss Minimization

Abstract

Principal component analysis (PCA) is a widely used unsupervised method for dimensionality reduction. Its kernelized version, kernel principal component analysis (KPCA), can capture nonlinear structure in the data. KPCA is derived from the Gram matrix, which is not robust when the data contain outliers: the principal axes in the feature space can be deviated by outliers, leading to misinterpretation of the principal components. In this paper, we propose a robust KPCA method based on a reformulation in Euclidean space, in which an error measurement is introduced into the loss function and an ℓ2,1-regularization term is added to it. The ℓ2,1-regularization of the proposed method is motivated by sparse PCA via variable projection. However, because the solution of the proposed method does not satisfy orthogonality, orthonormal bases are obtained with the Gram-Schmidt orthonormalization process. In the experiments, a toy example and real data are used for outlier detection to verify the performance and effectiveness of the method. In the toy example, the proposed method reduces the influence of outliers and detects more outliers than KPCA does. On the real data, the proposed method improves detection compared with other existing methods.
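The abstract names two computational ingredients: an ℓ2,1-regularized error term (the ℓ2,1 norm of an error matrix E sums the ℓ2 norms of its rows, so the penalty drives whole rows, i.e. whole samples, to zero, which is what flags outliers) and a Gram-Schmidt pass to orthonormalize the non-orthogonal basis. The sketch below is not the authors' implementation; it only illustrates these two standard building blocks, using the row-wise soft-thresholding (proximal) update commonly associated with an ℓ2,1 penalty and a modified Gram-Schmidt routine. All function names and parameters are illustrative.

```python
import numpy as np

def l21_norm(E):
    """l2,1 norm: the sum of the Euclidean norms of the rows of E."""
    return np.sum(np.linalg.norm(E, axis=1))

def prox_l21(E, tau):
    """Proximal operator of tau * ||E||_{2,1}: row-wise soft-thresholding.
    Rows with norm <= tau are set to zero; larger rows are shrunk."""
    norms = np.linalg.norm(E, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return E * scale

def gram_schmidt(V, eps=1e-12):
    """Modified Gram-Schmidt: orthonormalize the columns of V.
    Needed when, as the abstract notes, the learned basis is not
    orthogonal by construction."""
    V = np.asarray(V, dtype=float)
    n, k = V.shape
    Q = np.zeros((n, k))
    for j in range(k):
        v = V[:, j].copy()
        for i in range(j):
            v -= (Q[:, i] @ v) * Q[:, i]   # remove earlier components
        nrm = np.linalg.norm(v)
        if nrm > eps:
            Q[:, j] = v / nrm
    return Q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    E = rng.normal(size=(5, 3))
    E[1] *= 10.0                      # one "outlier" row
    print(l21_norm(E))
    print(prox_l21(E, tau=2.0))       # small rows vanish, the big row survives
    Q = gram_schmidt(rng.normal(size=(4, 3)))
    print(np.round(Q.T @ Q, 6))       # approximately the identity
```

In an alternating scheme of the kind the abstract suggests, a proximal step like prox_l21 would update the error matrix while a least-squares step updates the basis, with gram_schmidt applied to the basis afterwards; the exact updates are specified in the paper itself.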

Cite

APA

Wang, D., & Tanaka, T. (2020). Robust Kernel Principal Component Analysis with ℓ2,1-Regularized Loss Minimization. IEEE Access, 8, 81864–81875. https://doi.org/10.1109/ACCESS.2020.2990493
