Learnability for the information bottleneck


Abstract

The Information Bottleneck (IB) method provides an insightful and principled approach for balancing compression and prediction in representation learning. The IB objective I(X; Z) − βI(Y; Z) employs a Lagrange multiplier β to tune this trade-off. However, in practice, not only is β chosen empirically without theoretical guidance, there is also a lack of theoretical understanding of the relationship between β, learnability, the intrinsic nature of the dataset, and model capacity. In this paper, we show that if β is improperly chosen, learning cannot happen: the trivial representation P(Z|X) = P(Z) becomes the global minimum of the IB objective. We show how this can be avoided by identifying a sharp phase transition between the unlearnable and the learnable which arises as β is varied. This phase transition defines the concept of IB-Learnability. We prove several sufficient conditions for IB-Learnability, which provide theoretical guidance for choosing a good β. We further show that IB-Learnability is determined by the largest confident, typical, and imbalanced subset of the examples (the conspicuous subset), and discuss its relation with model capacity. We give practical algorithms to estimate the minimum β for a given dataset. We also empirically demonstrate our theoretical conditions with analyses of synthetic datasets, MNIST, and CIFAR10.
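The trade-off and the phase transition described in the abstract can be illustrated numerically. The sketch below (a toy joint distribution and encoder choices of our own, not from the paper) evaluates the IB objective I(X; Z) − βI(Y; Z) for discrete distributions and shows that for a small β the trivial encoder P(Z|X) = P(Z) achieves a lower (better) objective than an informative encoder, while for a large β the informative encoder wins:

```python
import numpy as np

def mutual_information(p_ab):
    """I(A;B) in nats for a joint distribution p_ab[a, b]."""
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0  # skip zero-probability cells
    return float((p_ab[mask] * np.log(p_ab[mask] / (p_a @ p_b)[mask])).sum())

def ib_objective(p_xy, p_z_given_x, beta):
    """IB objective I(X;Z) - beta * I(Y;Z) for encoder p_z_given_x[x, z]."""
    p_x = p_xy.sum(axis=1)                 # p(x)
    p_xz = p_x[:, None] * p_z_given_x      # p(x, z)
    p_y_given_x = p_xy / p_x[:, None]      # p(y | x)
    # Markov chain Z - X - Y: p(y, z) = sum_x p(y|x) p(x) p(z|x)
    p_yz = p_y_given_x.T @ p_xz            # p(y, z)
    return mutual_information(p_xz) - beta * mutual_information(p_yz)

# Toy joint distribution where X and Y are correlated.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

# Trivial encoder: p(z|x) = p(z), so I(X;Z) = I(Y;Z) = 0 and the objective is 0.
trivial = np.full((2, 2), 0.5)
# Informative encoder: Z = X (identity mapping).
identity = np.eye(2)

for beta in (0.5, 5.0):
    print(f"beta={beta}: trivial={ib_objective(p_xy, trivial, beta):.3f}, "
          f"identity={ib_objective(p_xy, identity, beta):.3f}")
```

For this toy distribution the trivial encoder scores 0 at every β, while the identity encoder scores above 0 at β = 0.5 and below 0 at β = 5.0, mirroring the unlearnable-to-learnable transition the paper analyzes.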

Cite (APA)

Wu, T., Fischer, I., Chuang, I. L., & Tegmark, M. (2019). Learnability for the information bottleneck. Entropy, 21(10). https://doi.org/10.3390/e21100924
