Weight Fixing Networks

Abstract

Modern iterations of deep learning models contain millions (billions) of unique parameters, each represented by a b-bit number. Popular attempts at compressing neural networks (such as pruning and quantisation) have shown that many of the parameters are superfluous, which we can remove (pruning) or express with b′ < b bits (quantisation) without hindering performance.
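
To make the storage arithmetic concrete, below is a minimal sketch of plain codebook quantisation using a simple 1-D k-means, illustrating the general idea the abstract describes rather than the paper's actual Weight Fixing procedure: clustering a weight tensor to k shared values means each weight can be stored as a ceil(log2 k)-bit index into the codebook, so b = 32 bits per float32 weight drops to b′ = 4 bits when k = 16. The function name fix_weights and the toy tensor are hypothetical.

import numpy as np

def fix_weights(weights: np.ndarray, k: int = 16, iters: int = 20):
    """Cluster weights to k shared values (simple 1-D k-means).

    Returns the k-entry codebook and, for every weight, the index
    of its assigned codebook entry."""
    flat = weights.ravel()
    # Spread the initial centroids evenly over the weight range.
    codebook = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        idx = np.abs(flat[:, None] - codebook[None, :]).argmin(axis=1)
        # Move each centroid to the mean of the weights assigned to it.
        for j in range(k):
            if np.any(idx == j):
                codebook[j] = flat[idx == j].mean()
    return codebook, idx.reshape(weights.shape)

# Toy example: a float32 layer quantised to 16 unique weight values.
weights = np.random.randn(512, 512).astype(np.float32)
codebook, idx = fix_weights(weights, k=16)

b = 32                                          # bits per original weight
b_prime = int(np.ceil(np.log2(len(codebook))))  # bits per codebook index
print(f"unique weight values: {len(codebook)}")
print(f"bits per weight: {b} -> {b_prime}")     # 32 -> 4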

Citation (APA)

Subia-Waud, C., & Dasmahapatra, S. (2022). Weight Fixing Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13671 LNCS, pp. 415–431). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-20083-0_25
