Shared deep kernel learning for dimensionality reduction

Abstract

Deep Kernel Learning (DKL) has proven to be an effective way to learn complex feature representations by combining the structural properties of deep learning with the nonparametric flexibility of kernel methods, and it lends itself naturally to supervised dimensionality reduction. However, when only limited training data are available, its performance can be compromised because the deep structure embedded in the model has many parameters that are difficult to optimize efficiently. To address this issue, we propose the Shared Deep Kernel Learning model, which combines DKL with the shared Gaussian Process Latent Variable Model. The new method not only improves performance without increasing model complexity but also learns hierarchical features by sharing the deep kernel. Comparisons with several supervised dimensionality reduction methods and a deep learning approach verify the advantages of the proposed model.
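The paper itself includes no code, but the core DKL idea the abstract builds on is easy to sketch: a base kernel (here an RBF) is evaluated on features produced by a neural network, and the network weights are learned jointly with the kernel hyperparameters by maximizing the Gaussian process marginal likelihood. The PyTorch sketch below is a minimal illustration under these assumptions; the architecture, the DeepRBFKernel and gp_nll names, and all hyperparameters are illustrative choices, not the authors' implementation, and the shared-GPLVM component of the proposed model is not shown.

import torch
import torch.nn as nn

class DeepRBFKernel(nn.Module):
    """RBF kernel evaluated on features g(x) from a small neural network.

    Reusing one instance of this module across several GPs mirrors the
    'shared deep kernel' idea: extra outputs add no new deep parameters.
    """
    def __init__(self, in_dim, feat_dim=2):
        super().__init__()
        # Deep feature extractor: the 'structural' part of DKL.
        self.net = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(),
            nn.Linear(32, feat_dim),
        )
        # Kernel hyperparameters, log-parameterized to stay positive.
        self.log_lengthscale = nn.Parameter(torch.zeros(()))
        self.log_outputscale = nn.Parameter(torch.zeros(()))

    def forward(self, x1, x2):
        z1, z2 = self.net(x1), self.net(x2)
        # Squared distances via broadcasting (differentiable everywhere,
        # unlike squaring torch.cdist, whose gradient is NaN at zero).
        d2 = (z1.unsqueeze(1) - z2.unsqueeze(0)).pow(2).sum(-1)
        return self.log_outputscale.exp() * torch.exp(
            -0.5 * d2 / self.log_lengthscale.exp() ** 2)

def gp_nll(kernel, x, y, noise=1e-2):
    """Negative log marginal likelihood of a zero-mean GP (constant dropped)."""
    n = x.shape[0]
    K = kernel(x, x) + noise * torch.eye(n)
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y.unsqueeze(-1), L)  # K^{-1} y
    return 0.5 * (y.unsqueeze(-1) * alpha).sum() + torch.log(torch.diagonal(L)).sum()

# Toy usage: network weights and kernel hyperparameters are trained jointly.
torch.manual_seed(0)
x = torch.randn(50, 5)
y = torch.sin(x.sum(-1))
kernel = DeepRBFKernel(in_dim=5)
opt = torch.optim.Adam(kernel.parameters(), lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = gp_nll(kernel, x, y)
    loss.backward()
    opt.step()

Sharing a single DeepRBFKernel instance across several GP models is the sense in which a deep kernel can be "shared" without increasing model complexity, which is the trade-off the abstract highlights.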

Cite

APA

Jiang, X., Gao, J., Liu, X., Cai, Z., Zhang, D., & Liu, Y. (2018). Shared deep kernel learning for dimensionality reduction. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10939 LNAI, pp. 297–308). Springer Verlag. https://doi.org/10.1007/978-3-319-93040-4_24
