This paper generalizes the Maurer--Pontil framework of finite-dimensional lossy coding schemes to the setting where a high-dimensional random vector is mapped to an element of a compact set of latent representations in a lower-dimensional Euclidean space, and the reconstruction map belongs to a given class of nonlinear maps. Under this setup, which encompasses a broad class of unsupervised representation learning problems, we establish a connection to approximate generative modeling under structural constraints using tools from the theory of optimal transportation. Next, we consider the problem of learning a coding scheme on the basis of a finite collection of training samples and present generalization bounds that hold with high probability. We then illustrate the general theory in the setting where the reconstruction maps are implemented by deep neural nets.
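To make the setup concrete, here is a minimal toy sketch of a finite-dimensional coding scheme of the kind the abstract describes: samples in a high-dimensional space are encoded to one of finitely many latent points (a compact latent set) in a lower-dimensional space, and a fixed nonlinear map reconstructs them. All dimensions, the codebook, and the random-feature reconstruction map below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): ambient dim d, latent dim m < d,
# codebook size k, number of training samples n.
d, m, k, n = 8, 2, 16, 500
X = rng.normal(size=(n, d))          # training samples of the random vector

C = rng.normal(size=(k, m))          # finite (hence compact) latent set in R^m

# A fixed nonlinear reconstruction map f: R^m -> R^d
# (random-feature style; stands in for "a given class of nonlinear maps").
W1 = rng.normal(size=(m, 32))
W2 = rng.normal(size=(32, d)) / np.sqrt(32)

def reconstruct(z):
    return np.tanh(z @ W1) @ W2

# Encoder: send each sample to the latent code whose reconstruction
# is nearest in squared Euclidean distance.
recon_C = reconstruct(C)             # (k, d): reconstructions of all codes
idx = np.argmin(((X[:, None, :] - recon_C[None, :, :]) ** 2).sum(-1), axis=1)
X_hat = recon_C[idx]

# Empirical squared-error distortion of this coding scheme on the sample.
distortion = float(np.mean(np.sum((X - X_hat) ** 2, axis=1)))
print(distortion)
```

Learning a scheme from data, in this toy picture, would mean choosing both the latent set and the reconstruction map to minimize the empirical distortion; the paper's generalization bounds control the gap between this empirical quantity and its population counterpart.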
Lee, J., & Raginsky, M. (2019). Learning Finite-Dimensional Coding Schemes with Nonlinear Reconstruction Maps. SIAM Journal on Mathematics of Data Science, 1(3), 617–642. https://doi.org/10.1137/18m1234461