Ornstein auto-encoders


Abstract

We propose the Ornstein auto-encoder (OAE), a representation learning model for correlated data. In many applications of interest, data have nested structure; examples include the VGGFace and MNIST datasets. We view such data as consisting of i.i.d. copies of a stationary random process and seek a latent-space representation of the observed sequences. This viewpoint necessitates a distance measure between two random processes. We propose to use Ornstein's d-bar distance, a process extension of the Wasserstein distance. We first show that the theorem of Bousquet et al. (2017) for Wasserstein auto-encoders extends to stationary random processes. This result, however, requires both the encoder and the decoder to map an entire sequence to another. We then show that when exchangeability within a process is assumed, which is valid for VGGFace and MNIST, these maps reduce to univariate ones, resulting in a much simpler, tractable optimization problem. Our experiments show that OAEs successfully separate individual sequences in the latent space and can generate new variations of unknown as well as known identities. The latter has not been possible with other existing methods.
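For readers unfamiliar with the d-bar distance, one standard formulation (due to Ornstein) compares two stationary processes through jointly stationary couplings; the sketch below states it in that form. The notation here is illustrative, not taken from the paper.

```latex
% Ornstein's \bar{d} distance between stationary processes with laws
% \mu and \nu, where d is a per-coordinate metric and \mathcal{J}_s(\mu,\nu)
% is the set of jointly stationary couplings of the two processes:
\bar{d}(\mu, \nu)
  = \inf_{\lambda \in \mathcal{J}_s(\mu,\nu)} \mathbb{E}_\lambda\, d(X_0, Y_0).
% Restricted to a single coordinate, this recovers an order-1
% Wasserstein distance, which is the sense in which \bar{d} extends it.
```

To make the "univariate maps" point concrete, here is a minimal, hypothetical PyTorch sketch of a WAE-MMD-style objective in which the encoder and decoder act on one observation at a time and the sequence loss averages over the exchangeable observations. All module names, architectures, and the choice of MMD penalty are assumptions for illustration; this is not the paper's exact objective.

```python
# Hypothetical sketch (not the paper's implementation): a WAE-MMD-style
# objective with univariate encoder/decoder maps, as suggested by the
# exchangeability reduction described in the abstract.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a single observation to a single latent code."""
    def __init__(self, x_dim=784, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                 nn.Linear(256, z_dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Maps a single latent code back to observation space."""
    def __init__(self, x_dim=784, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))
    def forward(self, z):
        return self.net(z)

def imq_kernel(a, b, c=1.0):
    # Inverse multiquadratic kernel, a common choice for WAE-MMD penalties.
    return c / (c + torch.cdist(a, b) ** 2)

def mmd_penalty(z, z_prior):
    # Simple (biased) MMD^2 estimate between encoded codes and prior draws.
    return (imq_kernel(z, z).mean()
            + imq_kernel(z_prior, z_prior).mean()
            - 2.0 * imq_kernel(z, z_prior).mean())

def sequence_loss(enc, dec, x_seq, lam=10.0):
    # x_seq: (n, x_dim) holds n exchangeable observations from one process
    # (e.g., n face images of one identity). Because the maps are univariate,
    # the loss is an average of per-observation reconstruction terms plus a
    # penalty matching the aggregate code distribution to a N(0, I) prior.
    z = enc(x_seq)
    recon = ((dec(z) - x_seq) ** 2).sum(dim=1).mean()
    return recon + lam * mmd_penalty(z, torch.randn_like(z))
```

The coordinate-wise structure is what makes the optimization tractable: rather than learning maps between whole sequences, a single shared encoder/decoder pair is trained on individual observations, with correlation within a sequence handled by the modeling assumptions rather than the network architecture.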

Citation (APA)

Choi, Y., & Won, J. H. (2019). Ornstein auto-encoders. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 2172–2178). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/301
