Improving deep neural network performance by reusing features trained with transductive transference

Abstract

Transfer learning is a machine learning paradigm in which a target problem is solved by reusing, with minor modifications, what was learned on a different but related source problem. In this paper we propose a novel feature transference approach, particularly for the case where the source and target problems are drawn from different distributions. Using deep neural networks, we transfer low-, middle-, or higher-layer features from a machine trained in either an unsupervised or a supervised way. Applying this feature transference approach to a Convolutional Neural Network and a Stacked Denoising Autoencoder on four different datasets, we achieve a lower classification error rate with a significant reduction in computation time, using lower-layer features trained in a supervised way and higher-layer features trained in an unsupervised way, when classifying images from the uppercase and lowercase letters dataset. © 2014 Springer International Publishing Switzerland.
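
To illustrate the general idea of layer-wise feature transference described in the abstract, the sketch below copies the lower-layer features of a network trained on a source problem into a network for a related target problem and fine-tunes only the remaining layers. This is not the authors' code; the framework (PyTorch), layer sizes, class counts, and freezing policy are illustrative assumptions.

```python
# Minimal sketch of lower-layer feature transference, assuming PyTorch and
# 28x28 grayscale inputs (e.g. letter images). Illustrative only.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy CNN: conv blocks act as lower/middle-layer features,
    the fully connected head as higher-layer features."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(            # lower/middle-layer features
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(          # higher-layer features
            nn.Flatten(), nn.Linear(32 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Source problem (e.g. uppercase letters) and target problem (e.g. lowercase
# letters) drawn from different distributions; 26 classes each is an assumption.
source = SmallCNN(num_classes=26)
# ... train `source` on the source problem here ...

target = SmallCNN(num_classes=26)
# Transfer the lower-layer features learned on the source problem.
target.features.load_state_dict(source.features.state_dict())

# Optionally freeze the transferred layers and train only the higher layers,
# which is one way the reported reduction in computation time can arise.
for p in target.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, target.parameters()), lr=0.1
)
# ... train `target` on the target problem with `optimizer` ...
```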

APA

Kandaswamy, C., Silva, L. M., Alexandre, L. A., Santos, J. M., & De Sá, J. M. (2014). Improving deep neural network performance by reusing features trained with transductive transference. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8681 LNCS, pp. 265–272). Springer Verlag. https://doi.org/10.1007/978-3-319-11179-7_34
