Depth-Adaptive Deep Neural Network Based on Learning Layer Relevance Weights

Abstract

In this paper, we propose two novel Adaptive Neural Network Approaches (ANNAs) that automatically learn the optimal network depth. The proposed class-independent and class-dependent ANNAs address two main challenges faced by typical deep learning paradigms: setting the optimal network depth and improving model interpretability. Specifically, the ANNA approaches simultaneously train the network model, learn the network depth in an unsupervised manner, and assign fuzzy relevance weights to each network layer to better decipher the model's behavior. In addition, two novel cost functions are designed to optimize the layer fuzzy relevance weights along with the model hyper-parameters. The proposed ANNA approaches were assessed using standard benchmarking datasets and performance measures. The experiments proved their effectiveness compared to typical deep learning approaches, which rely on empirical tuning and scaling of the network depth. Moreover, the experimental findings demonstrated the ability of the proposed class-independent and class-dependent ANNAs to reduce network complexity and build lightweight models with lower overfitting risk and better generalization.
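To illustrate the core idea of layer relevance weighting described above, the following is a minimal, hypothetical sketch (not the authors' actual implementation): a small feed-forward network whose output is a relevance-weighted combination of every hidden layer's activation. The class name, layer widths, and the softmax parameterization of the fuzzy weights are all assumptions made here for illustration; in the paper these weights are optimized via dedicated cost functions, and low-relevance layers could then be pruned to shrink the effective depth.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax, used here to keep relevance
    weights fuzzy: each in [0, 1] and summing to 1 (an assumption)."""
    e = np.exp(z - z.max())
    return e / e.sum()

class DepthAdaptiveMLP:
    """Hypothetical sketch of a depth-adaptive network: the output is a
    relevance-weighted sum of all layer activations, so layers whose
    learned relevance approaches zero contribute nothing and could be
    removed to reduce the effective depth."""

    def __init__(self, dims, seed=0):
        # dims: [input_width, hidden_width, hidden_width, ...]
        # All hidden layers share one width so their outputs can be mixed.
        rng = np.random.default_rng(seed)
        self.layers = [rng.standard_normal((a, b)) * 0.1
                       for a, b in zip(dims[:-1], dims[1:])]
        # One learnable relevance logit per layer; in the paper these
        # would be optimized jointly with the weights via the proposed
        # cost functions (here they start uniform and are not trained).
        self.relevance_logits = np.zeros(len(self.layers))

    def forward(self, x):
        activations = []
        h = x
        for W in self.layers:
            h = np.tanh(h @ W)
            activations.append(h)
        r = softmax(self.relevance_logits)  # fuzzy layer relevance weights
        # Output is the relevance-weighted mix of every layer's activation.
        out = sum(ri * a for ri, a in zip(r, activations))
        return out, r

model = DepthAdaptiveMLP([4, 8, 8, 8])
output, relevance = model.forward(np.ones(4))
```

With uniform (untrained) logits, every layer receives equal relevance 1/3; after training, a near-zero relevance weight would flag that layer as a candidate for removal, which is the depth-adaptation mechanism the abstract describes.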

Citation (APA)
Alturki, A., Bchir, O., & Ben Ismail, M. M. (2023). Depth-Adaptive Deep Neural Network Based on Learning Layer Relevance Weights. Applied Sciences (Switzerland), 13(1). https://doi.org/10.3390/app13010398
