A Specific and Selective Neural Response Representation with Decorrelating Auto-Encoder

This article is free to access.

Abstract

Since the pioneering report on the unsupervised pre-training principle was published, deep architectures, as simulations of the primary cortex, have been intensively studied and successfully applied to recognition tasks. Motivated by this, we propose a decorrelating regularizer for auto-encoders, named the decorrelating auto-encoder (DcA), which can be stacked into a deep architecture called the SDcA model. The learning algorithm is designed on the principles of redundancy reduction and infomax, and the fine-tuning algorithm is based on a correlation-detection criterion. We evaluate the model on auditory and handwriting recognition tasks using the TIMIT acoustic-phonetic continuous speech corpus and the MNIST database. The results show that our model has a general advantage over four existing models, especially in the lower layers, and that when training samples are scarce it exhibits stronger learning capacity and generalization.
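The abstract describes a redundancy-reduction regularizer that penalizes correlated hidden-unit responses. The paper's exact loss is not given here, so the following is only a hedged sketch of what such a decorrelation penalty could look like: the mean squared off-diagonal entry of the correlation matrix of hidden activations, added to a reconstruction loss. The function names and the weighting `lam` are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def decorrelation_penalty(h):
    """Mean squared off-diagonal correlation of hidden activations.

    h: (n_samples, n_hidden) activation matrix.
    Hypothetical regularizer in the spirit of the DcA's
    redundancy-reduction objective (the exact loss is not
    stated in this abstract).
    """
    hc = h - h.mean(axis=0, keepdims=True)          # center each unit
    cov = hc.T @ hc / (len(h) - 1)                  # sample covariance
    std = np.sqrt(np.diag(cov)) + 1e-8              # per-unit std dev
    corr = cov / np.outer(std, std)                 # correlation matrix
    off = corr - np.diag(np.diag(corr))             # zero the diagonal
    return np.mean(off ** 2)

def dca_loss(x, x_rec, h, lam=0.1):
    """Illustrative combined objective: reconstruction error
    plus a decorrelation term weighted by lam (an assumption)."""
    rec = np.mean((x - x_rec) ** 2)
    return rec + lam * decorrelation_penalty(h)
```

Minimizing this penalty pushes hidden units toward mutually decorrelated (less redundant) responses, which is the kind of specific, selective representation the abstract claims for the DcA.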

Citation (APA)

Zhou, J., Chen, Q., Jiang, H., Cai, S., Shao, G., & Kikuchi, H. (2019). A Specific and Selective Neural Response Representation with Decorrelating Auto-Encoder. IEEE Access, 7, 70011–70020. https://doi.org/10.1109/ACCESS.2019.2918692
