Exploiting the Relationship between Pruning Ratio and Compression Effect for Neural Network Model Based on TensorFlow


Abstract

Pruning is a method of compressing a neural network model; it affects the model's accuracy and computation time when the model makes a prediction. This paper puts forward the hypothesis that the pruning proportion is positively correlated with the compression scale of the model but not with the prediction accuracy or the calculation time. To test the hypothesis, a group of experiments is designed: a neural network model is trained on the MNIST data set using TensorFlow, and pruning experiments are carried out on this model to investigate the relationship between the pruning proportion and the compression effect. Six different pruning proportions are set for comparison, and the experimental results confirm the hypothesis.
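The abstract does not show the paper's code, but the kind of experiment it describes, zeroing a fixed proportion of a layer's weights and measuring the resulting sparsity, can be sketched with magnitude-based pruning. This is a common pruning criterion (the variant used in TensorFlow's model-optimization toolkit), not necessarily the exact method of the paper; the function name and the use of plain NumPy here are illustrative assumptions.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, ratio: float) -> np.ndarray:
    """Zero out the fraction `ratio` of weights with the smallest magnitude.

    Returns a new array; `ratio` = 0.3 means 30% of entries are set to zero,
    which is one way to realize the 'pruning proportion' varied in the paper.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * ratio)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude acts as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Hypothetical layer with 100 distinct weights, pruned at 30%:
w = np.arange(1, 101, dtype=float).reshape(10, 10)
pruned = prune_by_magnitude(w, 0.3)
sparsity = 1.0 - np.count_nonzero(pruned) / pruned.size  # -> 0.3
```

In a compression experiment like the one described, the pruned weights would then be stored in a sparse format (or the model re-saved), so that a higher pruning proportion directly yields a smaller model, which is the positive correlation the paper tests.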

Citation (APA)

Liu, B., Wu, Q., Zhang, Y., Cao, Q., & Xu, X. (2020). Exploiting the Relationship between Pruning Ratio and Compression Effect for Neural Network Model Based on TensorFlow. Security and Communication Networks, 2020. https://doi.org/10.1155/2020/5218612
