Fast Sparse Deep Neural Networks: Theory and Performance Analysis

This article is free to access.

Abstract

This paper proposes fast sparse deep neural networks, which aim to offer an alternative way of learning in a deep structure. We examine several optimization algorithms for traditional deep neural networks and find that their training is time-consuming because of the large number of connection parameters between layers. To reduce this cost, the design of fast sparse deep neural networks rests on two main ideas. First, the parameters of each hidden layer are obtained via closed-form solutions, unlike the iterative updating strategy of the back-propagation (BP) algorithm. Second, fast sparse deep neural networks estimate the output target by summing a multi-layer linear approximation, which differs from most deep neural network models. Unlike traditional deep neural networks, fast sparse deep neural networks achieve excellent generalization performance without fine-tuning. It is also worth noting that fast sparse deep neural networks effectively overcome the shortcomings of the extreme learning machine and the hierarchical extreme learning machine. Extensive experimental results on benchmark datasets demonstrate that, compared with existing deep neural networks, the proposed model and optimization algorithms are feasible and efficient.

Citation (APA)

Zhao, J., & Jiao, L. (2019). Fast Sparse Deep Neural Networks: Theory and Performance Analysis. IEEE Access, 7, 74040–74055. https://doi.org/10.1109/ACCESS.2019.2920688
