Text classification based on ReLU activation function of SAE algorithm


Abstract

To address the problem that the Sigmoid function's back-propagation gradient tends to vanish during the training of deep self-encoding (autoencoder) neural networks, a method based on the ReLU activation function is proposed for training the self-encoding network. This paper analyzes the performance of different activation functions, comparing ReLU with the traditional Tanh and Sigmoid functions, and carries out experiments on the test set of the Reuters-21578 standard corpus. The experimental results show that using ReLU as the activation function not only improves the network's convergence speed but also improves the classification accuracy.
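For context (these are the standard definitions of the three activation functions compared, not formulas quoted from the paper), the derivatives below illustrate the vanishing-gradient argument behind the abstract:

```latex
% Not from the paper itself: standard definitions of Sigmoid, Tanh and ReLU
% with their derivatives, showing why the Sigmoid gradient vanishes in
% back-propagation while the ReLU gradient does not.
\[
  \sigma(x) = \frac{1}{1 + e^{-x}},
  \qquad
  \sigma'(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr) \le \tfrac{1}{4}
\]
% Each layer multiplies the back-propagated error by at most 1/4,
% so the gradient shrinks geometrically with network depth.
\[
  \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}},
  \qquad
  \tanh'(x) = 1 - \tanh^{2}(x) \le 1
\]
% Tanh has a larger derivative but still saturates for large |x|.
\[
  \operatorname{ReLU}(x) = \max(0, x),
  \qquad
  \operatorname{ReLU}'(x) =
  \begin{cases}
    1, & x > 0 \\
    0, & x < 0
  \end{cases}
\]
% For active units (x > 0) the ReLU derivative is exactly 1, so the error
% signal passes through unchanged, which is consistent with the paper's
% reported gain in convergence speed.
```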

CITATION STYLE

APA

Cui, J. L., Qiu, S., Jiang, M. Y., Pei, Z. L., & Lu, Y. N. (2017). Text classification based on ReLU activation function of SAE algorithm. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10261 LNCS, pp. 44–50). Springer Verlag. https://doi.org/10.1007/978-3-319-59072-1_6
