Aggregated learning: A vector-quantization approach to learning neural network classifiers

Abstract

We consider the problem of learning a neural network classifier. Under the information bottleneck (IB) principle, we associate with this classification problem a representation learning problem, which we call “IB learning”. We show that IB learning is, in fact, equivalent to a special class of quantization problems. Classical results in rate-distortion theory then suggest that IB learning can benefit from a “vector quantization” approach, namely, simultaneously learning the representations of multiple input objects. Such an approach, assisted by variational techniques, results in a novel learning framework, “Aggregated Learning”, for classification with neural network models. In this framework, several objects are jointly classified by a single neural network. The effectiveness of this framework is verified through extensive experiments on standard image recognition and text classification tasks.
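To make the core idea concrete, below is a minimal PyTorch sketch of jointly classifying several objects with a single network, as the abstract describes. The layer sizes, the aggregation size n, and the encoder/head split are illustrative assumptions for this sketch, not the authors' exact architecture or training objective.

```python
import torch
import torch.nn as nn

class AggregatedClassifier(nn.Module):
    """Sketch of the "Aggregated Learning" idea: one network jointly
    classifies n input objects at once. Hyperparameters are assumptions."""

    def __init__(self, input_dim, num_classes, n_objects=4, hidden_dim=256):
        super().__init__()
        self.n = n_objects
        # Encoder maps the concatenation of n inputs to a shared
        # representation, learned jointly across the aggregated objects.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim * n_objects, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # One classification head per aggregated object.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, num_classes) for _ in range(n_objects)]
        )

    def forward(self, xs):
        # xs: (batch, n_objects, input_dim) -- n objects grouped per example.
        z = self.encoder(xs.flatten(start_dim=1))
        # One set of logits per object: (batch, n_objects, num_classes).
        return torch.stack([head(z) for head in self.heads], dim=1)

# Usage: jointly classify 4 flattened 28x28 images into 10 classes.
model = AggregatedClassifier(input_dim=784, num_classes=10, n_objects=4)
logits = model(torch.randn(8, 4, 784))  # shape: (8, 4, 10)
```

The joint mapping of multiple inputs is what plays the role of vector quantization here: the representation is learned over blocks of objects rather than one object at a time.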

Cite (APA)

Soflaei, M., Guo, H., Al-Bashabsheh, A., Mao, Y., & Zhang, R. (2020). Aggregated learning: A vector-quantization approach to learning neural network classifiers. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 5810–5817). AAAI press. https://doi.org/10.1609/aaai.v34i04.6038
