DNNs as layers of cooperating classifiers

Abstract

A robust theoretical framework that can describe and predict the generalization ability of DNNs in general circumstances remains elusive. Classical attempts have produced complexity metrics that rely heavily on global measures of compactness and capacity, with little investigation into the effects of sub-component collaboration. We demonstrate intriguing regularities in the activation patterns of the hidden nodes within fully connected feedforward networks. By tracing the origin of these patterns, we show how such networks can be viewed as the combination of two information-processing systems: one continuous and one discrete. We describe how these two systems arise naturally from the gradient-based optimization process, and demonstrate the classification ability of the two systems, individually and in collaboration. This perspective on DNN classification offers a novel way to think about generalization, in which different subsets of the training data are used to train distinct classifiers; those classifiers are then combined to perform the classification task, and their consistency is crucial for accurate classification.
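To make the continuous/discrete distinction concrete, the sketch below shows how a discrete activation pattern can be read off a ReLU network alongside its continuous activations. This is only an illustrative toy (random, untrained weights; all names are invented here), not the paper's experimental setup: for each hidden node we record whether it fires (the discrete system) in addition to its real-valued output (the continuous system).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU network with random weights (illustrative only,
# not the trained networks studied in the paper).
W1 = rng.normal(size=(4, 8))   # input dim 4 -> 8 hidden nodes
W2 = rng.normal(size=(8, 3))   # 8 hidden nodes -> 3 output classes

def forward(x):
    """Return continuous hidden activations, the discrete on/off
    pattern of the hidden nodes, and the output logits."""
    pre = x @ W1                     # continuous pre-activations
    hidden = np.maximum(pre, 0)      # ReLU activations (continuous system)
    pattern = (pre > 0).astype(int)  # which nodes fire (discrete system)
    logits = hidden @ W2
    return hidden, pattern, logits

x = rng.normal(size=(4,))
hidden, pattern, logits = forward(x)
print(pattern)  # binary vector: one bit per hidden node
```

Each input thus induces a binary pattern over the hidden nodes; the abstract's "discrete system" refers to regularities in such patterns across the training data, while the "continuous system" is the usual real-valued computation.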

Citation (APA)

Davel, M. H., Theunissen, M. W., Pretorius, A. M., & Barnard, E. (2020). DNNs as layers of cooperating classifiers. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 3725–3732). AAAI press. https://doi.org/10.1609/aaai.v34i04.5782
