Kernel Principal Component Analysis (KPCA) has proven to be a versatile tool for unsupervised learning; however, it comes at a high computational cost due to the dense expansions in terms of kernel functions. We overcome this problem by proposing a new class of feature extractors that employ ℓ1 norms in coefficient space rather than in the Reproducing Kernel Hilbert Space in which KPCA was originally formulated. Moreover, the modified setting allows us to efficiently extract features that maximize criteria other than the variance, in a manner similar to projection pursuit.
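A minimal sketch of the idea described above, under simplifying assumptions: a kernel feature is an expansion f(x) = Σ_j α_j k(x_j, x), and when the coefficient vector α is constrained to the ℓ1 ball rather than the RKHS norm ball, a variance-maximizing solution can be sought at a vertex of that ball, i.e. at a single kernel function centered on one training point. The function names (`rbf_kernel`, `first_sparse_feature`) and the RBF kernel choice are illustrative, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian RBF kernel: k(x, y) = exp(-gamma * ||x - y||^2).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def first_sparse_feature(X, gamma=0.5):
    """Pick the single kernel function k(x_j, .) whose projection over the
    training sample has maximal empirical variance. With an l1-ball
    constraint on the coefficients, the variance maximum sits at a vertex
    of the ball, so the extracted feature is 1-sparse (a simplified
    illustration of the sparse feature-analysis idea)."""
    K = rbf_kernel(X, X, gamma)                # n x n kernel matrix
    Kc = K - K.mean(axis=0, keepdims=True)     # center each column over samples
    variances = (Kc ** 2).mean(axis=0)         # variance of projection onto k(x_j, .)
    j = int(np.argmax(variances))
    return j, variances[j]
```

The resulting extractor needs only one kernel evaluation per new point, in contrast to the n evaluations required by a dense KPCA expansion.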
CITATION STYLE
Smola, A. J., Mangasarian, O. L., & Schölkopf, B. (2002). Sparse Kernel Feature Analysis (pp. 167–178). https://doi.org/10.1007/978-3-642-55991-4_18