Abstract
Our community has improved the efficiency of deep learning applications by exploiting sparsity in inputs. Most of that work, though, is for inference, where weight sparsity is known statically, and/or for specialized hardware. In this paper, we propose SparseTrain, a software-only scheme to leverage dynamic sparsity during training on general-purpose SIMD processors. SparseTrain exploits the zeros that the ReLU activation function introduces in both feature maps and their gradients. Exploiting such sparsity is challenging because the sparsity degree is moderate and the locations of zeros change over time. SparseTrain identifies zeros in a dense data representation and performs vectorized computation. Variations of the scheme are applicable to all major components of training: forward propagation, backward propagation by inputs, and backward propagation by weights. Our experiments on a 6-core Intel Skylake-X server show that SparseTrain is very effective. In end-to-end training of VGG16, ResNet-34, and ResNet-50 with ImageNet, SparseTrain outperforms a highly-optimized direct convolution on the non-initial convolutional layers by 2.19x, 1.37x, and 1.31x, respectively. SparseTrain also benefits inference. It accelerates the non-initial convolutional layers of the aforementioned models by 1.88x, 1.64x, and 1.44x, respectively.
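To make the abstract's central idea concrete, the sketch below illustrates, in plain C, what "identifying zeros in a dense data representation and performing vectorized computation" can look like. It is a minimal illustration, not the authors' kernel: the function name sparse_aware_mac, the 1-D layout, and the loop structure are assumptions made here for clarity.

```c
#include <stddef.h>

/* Hypothetical convolution-like multiply-accumulate. Each input x[i]
 * (an activation that has passed through ReLU and may be exactly zero)
 * is broadcast against a row of weights and accumulated into the
 * outputs y. Data stays dense: no index lists or compressed formats. */
void sparse_aware_mac(const float *x, size_t n_in,
                      const float *w, float *y, size_t n_out) {
    for (size_t i = 0; i < n_in; ++i) {
        if (x[i] == 0.0f)          /* dynamic sparsity check on dense data */
            continue;              /* skip the entire vectorizable update  */
        const float xi = x[i];
        for (size_t j = 0; j < n_out; ++j)   /* SIMD-friendly inner loop   */
            y[j] += xi * w[i * n_out + j];
    }
}
```

One scalar test gates a whole vector-width worth of work, so the branch cost is amortized over the inner loop; this is one way to read the abstract's claim that even the moderate, shifting sparsity produced by ReLU is worth exploiting on general-purpose SIMD processors.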
Citation
Gong, Z., Ji, H., Fletcher, C. W., Hughes, C. J., & Torrellas, J. (2020). SparseTrain: Leveraging dynamic sparsity in software for training DNNs on general-purpose SIMD processors. In Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT (pp. 279–292). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1145/3410463.3414655