Learning general transformations of data for out-of-sample extensions

Abstract

While generative models such as GANs have been successful at mapping from noise to specific distributions of data, or more generally from one distribution of data to another, they cannot isolate the transformation that is occurring and apply it to a new distribution not seen in training. Thus, they memorize the domain of the transformation and cannot generalize the transformation out of sample. To address this, we propose a new neural network called a Neuron Transformation Network (NT-Net) that isolates the signal representing the transformation itself from the other signals representing internal distribution variation. This signal can then be removed from a new dataset distributed differently from the one the network was trained on. We demonstrate the effectiveness of our NT-Net on more than a dozen synthetic and biomedical single-cell RNA sequencing datasets, where the NT-Net is able to learn the data transformation performed by genetic and drug perturbations on one sample of cells and successfully apply it to another sample of cells to predict treatment outcome.
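The abstract describes learning a transformation signal from one distribution and applying it to a differently distributed sample. The NT-Net architecture itself is not detailed here; as a minimal toy sketch of the underlying idea, the transformation can be estimated from paired pre/post source data (here, simply as a per-feature mean shift) and then applied out of sample to a target distribution the model never saw. All names and data below are hypothetical illustrations, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source sample: cells before and after a perturbation.
# Here the perturbation shifts every feature by a fixed offset (+2.0).
source_pre = rng.normal(loc=0.0, scale=1.0, size=(500, 10))
source_post = source_pre + 2.0

# Isolate the transformation signal (the per-feature shift) from the
# internal variation of the source distribution.
transformation = source_post.mean(axis=0) - source_pre.mean(axis=0)

# A new target sample, distributed differently from the source
# (different mean and variance), never seen during "training".
target_pre = rng.normal(loc=5.0, scale=2.0, size=(300, 10))

# Apply the learned transformation out of sample to predict the
# target's post-perturbation state.
target_predicted = target_pre + transformation
```

In the actual NT-Net, this separation of "transformation" from "distribution variation" is learned by a neural network rather than computed as a mean difference; the sketch only illustrates why isolating the shift, instead of memorizing the source domain, lets it transfer to a new distribution.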

Citation (APA)

Amodio, M., Van Dijk, D., Wolf, G., & Krishnaswamy, S. (2020). Learning general transformations of data for out-of-sample extensions. In IEEE International Workshop on Machine Learning for Signal Processing, MLSP (Vol. 2020-September). IEEE Computer Society. https://doi.org/10.1109/MLSP49062.2020.9231660
