Deep Classifier Structures with Autoencoder for Higher-level Feature Extraction

Abstract

This paper investigates deep classifier structures with stacked autoencoders (SAE) for higher-level feature extraction, aiming to overcome difficulties in training deep neural networks with limited training data in high-dimensional feature spaces, such as overfitting and vanishing/exploding gradients. A three-stage learning algorithm is proposed for training a deep multilayer perceptron (DMLP) as the classifier. At the first stage, unsupervised learning with an SAE is used to obtain the initial weights of the feature extraction layers of the DMLP. At the second stage, error back-propagation is used to train the DMLP while the weights obtained at the first stage are kept fixed in its feature extraction layers. At the third stage, all the weights of the DMLP obtained at the second stage are refined by error back-propagation. Cross-validation is adopted to determine the network structures and the values of the learning parameters, and test datasets unseen in the cross-validation are used to evaluate the performance of the DMLP trained with the three-stage learning algorithm, in comparison with support vector machines (SVM) combined with SAE. Experimental results demonstrate the advantages and effectiveness of the proposed method.
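The three-stage procedure described above can be sketched in a minimal NumPy example. The layer sizes, learning rate, number of iterations, and toy data below are illustrative assumptions for a single-hidden-layer case, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for a high-dimensional, limited-data problem:
# 200 samples, 20 features, binary label.
X = rng.normal(size=(200, 20))
y = (X[:, :10].sum(axis=1) > 0).astype(float).reshape(-1, 1)

lr, hidden = 0.1, 8
W1 = rng.normal(scale=0.1, size=(20, hidden))     # feature-extraction layer
W_dec = rng.normal(scale=0.1, size=(hidden, 20))  # AE decoder (discarded later)
W2 = rng.normal(scale=0.1, size=(hidden, 1))      # classifier output layer

# Stage 1: unsupervised autoencoder pretraining of W1 (reconstruction loss).
for _ in range(500):
    H = sigmoid(X @ W1)
    err = (H @ W_dec) - X                      # reconstruction error
    grad_h = (err @ W_dec.T) * H * (1 - H)     # backprop through encoder
    W_dec -= lr * (H.T @ err / len(X))
    W1 -= lr * (X.T @ grad_h / len(X))

def train_classifier(update_w1):
    """Back-propagation on the DMLP; W1 is frozen unless update_w1 is True."""
    global W1, W2
    for _ in range(500):
        H = sigmoid(X @ W1)
        p = sigmoid(H @ W2)
        delta = (p - y) / len(X)               # cross-entropy output gradient
        if update_w1:
            grad_h = (delta @ W2.T) * H * (1 - H)
            W1 -= lr * (X.T @ grad_h)
        W2 -= lr * (H.T @ delta)

train_classifier(update_w1=False)  # Stage 2: feature layers fixed
train_classifier(update_w1=True)   # Stage 3: refine all weights jointly

acc = ((sigmoid(sigmoid(X @ W1) @ W2) > 0.5) == y).mean()
```

A real SAE would stack several such pretrained layers; the single encoder layer here is only meant to show how the decoder is used in stage 1 and then discarded, and how freezing versus updating `W1` separates stages 2 and 3.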

Citation (APA)
Almulla Khalaf, M. I. A., & Gan, J. Q. (2018). Deep Classifier Structures with Autoencoder for Higher-level Feature Extraction. In International Joint Conference on Computational Intelligence (Vol. 1, pp. 31–38). Science and Technology Publications, Lda. https://doi.org/10.5220/0006883000310038
