We study, and provide an exposition of, several phenomena related to the perceptron's compression. One theme concerns modifications of the perceptron algorithm that yield better guarantees on the margin of the hyperplane it outputs. These modifications can also be useful for training neural networks, and we demonstrate them with experimental data. In a second theme, we deduce consequences of the perceptron's compression in various contexts.
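For background, the compression discussed above stems from a basic property of the classical perceptron: its output is a sum of the (signed) examples on which it made mistakes, so the hypothesis is determined by a small subset of the data. The sketch below shows this for the standard perceptron update rule; it is not the paper's modified variants, and the toy dataset is an illustrative assumption.

```python
def perceptron(points, labels, max_epochs=100):
    """Classical perceptron. The weight vector w is a sum of the
    mistaken examples (label * point), which is why the output is
    'compressed' into the subset of examples recorded in `mistakes`."""
    dim = len(points[0])
    w = [0.0] * dim
    mistakes = []  # indices of examples that triggered an update
    for _ in range(max_epochs):
        clean_pass = True
        for i, (x, y) in enumerate(zip(points, labels)):
            # Mistake: the current hyperplane does not classify (x, y) with positive margin
            if y * sum(wj * xj for wj, xj in zip(w, x)) <= 0:
                w = [wj + y * xj for wj, xj in zip(w, x)]
                mistakes.append(i)
                clean_pass = False
        if clean_pass:  # no mistakes in a full pass: data is separated
            break
    return w, mistakes

# Toy linearly separable data (illustrative assumption)
X = [(1.0, 1.0), (2.0, 1.5), (-1.0, -1.0), (-2.0, -0.5)]
y = [1, 1, -1, -1]
w, mistakes = perceptron(X, y)
```

By the classical mistake bound, the number of updates is at most (R/γ)² for data of radius R and margin γ, so `mistakes` is a small compression of the sample.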
Moran, S., Nachum, I., Panasoff, I., & Yehudayoff, A. (2020). On the Perceptron’s Compression. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12098 LNCS, pp. 310–325). Springer. https://doi.org/10.1007/978-3-030-51466-2_29