A Dynamic Rectified Linear Activation Units

Abstract

Deep neural network regression models produce substantial gains in big-data prediction systems. Multilayer perceptron (MLP) networks have richer representational properties than single-layer feedforward networks, and making networks deeper is one of the main research directions. However, the vanishing gradient is the primary problem restricting this research, and choosing an appropriate activation function is one effective way to address it. This suggests a bold idea: if the activation function differs between two adjacent training epochs, the probability of producing the same gradient value is small. We propose a novel activation function whose shape changes dynamically during training. Our experimental results show that this activation function with 'dynamic' characteristics can effectively avoid the vanishing gradient and allows multilayer perceptron networks to be made deeper.
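The abstract does not give the activation's exact functional form, so the sketch below is only a minimal illustration of the general idea: a leaky-ReLU-style unit whose negative slope is re-drawn at the start of each epoch, so its shape (and gradient) differs between adjacent epochs. The class name DynamicReLU, the slope range, and the sampling scheme are illustrative assumptions, not the authors' formulation.

import numpy as np

class DynamicReLU:
    """Illustrative 'dynamic' rectified unit: a leaky ReLU whose
    negative slope is re-drawn every epoch, so the activation shape
    (and hence its gradient) differs between two adjacent epochs.
    This is a sketch of the idea in the abstract, not the paper's
    published method."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.negative_slope = 0.01  # initial shape (assumed value)

    def new_epoch(self):
        # Re-draw the shape parameter at each epoch boundary;
        # the sampling range [0.01, 0.3] is an assumption.
        self.negative_slope = self.rng.uniform(0.01, 0.3)

    def forward(self, x):
        return np.where(x > 0, x, self.negative_slope * x)

    def grad(self, x):
        # Derivative w.r.t. the input: never exactly zero for x < 0,
        # which is what mitigates the vanishing gradient.
        return np.where(x > 0, 1.0, self.negative_slope)

act = DynamicReLU()
x = np.linspace(-2.0, 2.0, 5)
for epoch in range(2):
    act.new_epoch()
    print(epoch, round(act.negative_slope, 3), act.forward(x))

Because the negative slope is never exactly zero and changes between epochs, units that would be 'dead' under a fixed ReLU still pass a small, varying gradient, which matches the intuition the abstract describes.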

Citation (APA)

Hu, X., Niu, P., Wang, J., & Zhang, X. (2019). A Dynamic Rectified Linear Activation Units. IEEE Access, 7, 180409–180416. https://doi.org/10.1109/ACCESS.2019.2959036
