MapFlow: latent transition via normalizing flow for unsupervised domain adaptation


Abstract

Unsupervised domain adaptation (UDA) aims at enhancing the generalizability of a classification model learned on a labeled source domain to an unlabeled target domain. An established approach to UDA is to constrain the classifier on an intermediate representation that is distributionally invariant across domains. However, recent theoretical and empirical research has revealed that relying on invariance alone fails to guarantee a small target error, making equality in the distribution of representations unnecessary. In this paper, we propose to relax invariant representation learning by finding a general relationship between the source and target representations, which allows the more discriminative domain information to be exchanged between domains. To this end, we formalize the MapFlow framework, which explicitly constructs an invertible mapping between the target encoded distribution and a variationally induced source representation. Empirical results on public benchmark datasets show the desirable performance of our proposed algorithm compared to state-of-the-art methods.

Citation (APA)

Askari, H., Latif, Y., & Sun, H. (2023). MapFlow: latent transition via normalizing flow for unsupervised domain adaptation. Machine Learning, 112(8), 2953–2974. https://doi.org/10.1007/s10994-023-06357-2
