Adversarial Alignment of Class Prediction Uncertainties for Domain Adaptation

Abstract

We consider unsupervised domain adaptation: given labelled examples from a source domain and unlabelled examples from a related target domain, the goal is to infer the labels of the target examples. Under the assumption that features from pre-trained deep neural networks are transferable across related domains, domain adaptation reduces to aligning the source and target domains at the level of class prediction uncertainties. We tackle this problem with a method based on adversarial learning that forces the label uncertainty predictions on the target domain to be indistinguishable from those on the source domain. Pre-trained deep neural networks are used to generate deep features with high transferability across related domains. We perform an extensive experimental analysis of the proposed method over a wide set of publicly available pre-trained deep neural networks. Results of our experiments on domain adaptation tasks for image classification show that class prediction uncertainty alignment with features extracted from pre-trained deep neural networks provides an efficient, robust and effective method for domain adaptation.
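The core idea in the abstract — a domain discriminator that tries to tell source from target based on the classifier's softmax output, while the classifier is trained to fool it — can be sketched in a minimal form. The sketch below is an illustration of this adversarial uncertainty-alignment scheme, not the authors' implementation: the synthetic Gaussian "deep features", the linear classifier, the logistic domain discriminator, and all hyperparameters (`lr`, `lam`, step count) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for pre-trained deep features (assumption):
# 2 classes; target is a covariate-shifted copy of the source.
n, d, k = 200, 5, 2
Xs = rng.normal(0.0, 1.0, (n, d))
ys = (Xs[:, 0] > 0).astype(int)       # source labels (known)
Xt = Xs + 0.5                          # target features (labels unknown)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = rng.normal(0, 0.1, (d, k))  # linear classifier on deep features
v = rng.normal(0, 0.1, k)       # logistic domain discriminator on softmax outputs

lr, lam = 0.1, 0.1              # learning rate, adversarial weight (assumed)
for step in range(500):
    ps, pt = softmax(Xs @ W), softmax(Xt @ W)   # class prediction uncertainties
    ds = 1.0 / (1.0 + np.exp(-(ps @ v)))        # P(domain = source) for source
    dt = 1.0 / (1.0 + np.exp(-(pt @ v)))        # P(domain = source) for target

    # Discriminator step: minimize domain cross-entropy (source=1, target=0).
    gv = ps.T @ (ds - 1.0) / n + pt.T @ dt / n
    v -= lr * gv

    # Classifier step: source cross-entropy plus a sign-flipped (adversarial)
    # domain loss, backpropagated through the softmax Jacobian
    # J^T v = p * (v - p.v) for each example.
    onehot = np.eye(k)[ys]
    g_ce = Xs.T @ (ps - onehot) / n
    g_dom_s = Xs.T @ ((ds - 1.0)[:, None] * (ps * (v - (ps @ v)[:, None]))) / n
    g_dom_t = Xt.T @ (dt[:, None] * (pt * (v - (pt @ v)[:, None]))) / n
    W -= lr * (g_ce - lam * (g_dom_s + g_dom_t))  # minus: gradient reversal

src_acc = float((softmax(Xs @ W).argmax(1) == ys).mean())
```

The gradient reversal on the domain term pushes the classifier to make target uncertainty vectors look like source ones, while the cross-entropy term keeps source predictions accurate; the discriminator ending up near chance level is the sign that the two uncertainty distributions have been aligned.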

Citation (APA)

Manders, J., van Laarhoven, T., & Marchiori, E. (2019). Adversarial Alignment of Class Prediction Uncertainties for Domain Adaptation. In International Conference on Pattern Recognition Applications and Methods (Vol. 1, pp. 221–231). Science and Technology Publications, Lda. https://doi.org/10.5220/0007519602210231
