Modern deep learning models contain millions, even billions, of unique parameters, each represented by a b-bit number. Popular approaches to compressing neural networks, such as pruning and quantisation, have shown that many of these parameters are superfluous: we can remove them (pruning) or express them with b′
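As a rough illustration of the two compression ideas named above, the sketch below applies magnitude pruning (zeroing the smallest-magnitude weights) and uniform quantisation (snapping weights to 2^bits evenly spaced levels) to a random weight vector. This is a generic, minimal sketch of the standard techniques, not the method of the cited paper; the function names and parameters are invented for illustration.

```python
import numpy as np

def prune_magnitude(w, sparsity=0.5):
    # Zero out the smallest-magnitude fraction of weights (pruning).
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantise_uniform(w, bits=4):
    # Map weights onto 2**bits evenly spaced levels (quantisation),
    # so each weight can be stored with `bits` bits plus shared range info.
    levels = 2 ** bits
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
print(np.mean(prune_magnitude(w) == 0))     # roughly half the weights removed
print(len(np.unique(quantise_uniform(w))))  # at most 16 distinct values remain
```

After quantisation the weight vector takes at most 2^4 = 16 distinct values, so each parameter needs only 4 bits instead of 32, which is the sense in which b′ < b bits suffice.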
Subia-Waud, C., & Dasmahapatra, S. (2022). Weight Fixing Networks. In Lecture Notes in Computer Science (Vol. 13671 LNCS, pp. 415–431). Springer. https://doi.org/10.1007/978-3-031-20083-0_25