With the development of artificial intelligence technology, the demand for new digital security and privacy solutions is growing rapidly. Inspired by synaptic pruning in mammalian brains, we develop a network pruning method, called the dynamic network pruning (DNP) method, which reduces the number of free parameters required by convolutional neural networks. DNP integrates seamlessly with gradient descent training and can be performed at any point, even multiple times, during training. We show that pruning connections (filters) is more intrinsic than pruning neurons (channels), and we relate the significance of filters to the dispersion of their values. With our proposed weight initialization technique, called smooth initialization (SI), unimportant filters can be easily identified using simple thresholds. The DNP method needs no pre-training to learn the connectivity of the network, nor does it require lengthy fine-tuning to restore performance. DNP can also improve data privacy in distributed environments thanks to its improved learning efficiency and convergence. Experiments show that our method outperforms several weight reduction methods in terms of reduction ratio and test accuracy on various models and datasets, and the generalization ability of the pruned models is not damaged.
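To make the dispersion-based criterion concrete, below is a minimal sketch of scoring convolutional filters by the spread of their weights and zeroing out those below a threshold. This is illustrative only and not the authors' exact DNP procedure; the helper name `prune_filters_by_dispersion`, the choice of standard deviation as the dispersion measure, and the threshold value are all assumptions.

```python
import torch
import torch.nn as nn

def prune_filters_by_dispersion(model: nn.Module, threshold: float) -> int:
    """Zero out convolutional filters whose weight dispersion (std. dev.)
    falls below `threshold`. Returns the number of filters pruned.

    Illustrative sketch only: standard deviation as the dispersion measure
    and the threshold value are assumptions, not the paper's exact method.
    """
    pruned = 0
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Conv2d):
                # Each output filter has shape (in_channels, kH, kW).
                for i in range(module.weight.shape[0]):
                    if module.weight[i].std() < threshold:
                        module.weight[i].zero_()  # prune low-dispersion filter
                        pruned += 1
    return pruned

# Example usage on a small model (hypothetical threshold value):
if __name__ == "__main__":
    model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
    n = prune_filters_by_dispersion(model, threshold=0.05)
    print(f"Pruned {n} low-dispersion filters")
```

Because the pruning step only reads and zeroes weights, it can be called between training epochs, which is consistent with the abstract's claim that pruning can happen at any point during training.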
Wu, L., Yue, H., Chen, P., Wu, D. A., & Jin, Q. (2019). A Novel Dynamic Network Pruning via Smooth Initialization and its Potential Applications in Machine Learning Based Security Solutions. IEEE Access, 7, 91667–91678. https://doi.org/10.1109/ACCESS.2019.2926993