Iterative discriminative domain adaptation

Abstract

A popular formulation of domain adaptation (DA) is to simultaneously minimize the source risk and the cross-domain discrepancy between the source domain Ds and the target domain Dt. However, this is believed to be suboptimal, since a shared feature that is indistinguishable to a domain classifier can still be far from optimal for the purpose of classification. In this paper, we propose an iterative DA framework that directly optimizes the classification error and provides DA solutions for both unsupervised and semi-supervised scenarios. Instead of directly attacking Ds → Dt, we employ an iterative self-training approach, Ds + Dt^(l−1) → Dt^l, which progressively labels Dt with the aim that lim_{l→∞} Dt^l ≈ Dt. For unsupervised DA, it performs comparably to state-of-the-art DA methods; in particular, it performs best among various unsupervised DA methods on the very difficult task MNIST → SVHN. By employing a few labeled samples in the target domain, it achieves significantly improved performance: for MNIST → SVHN, using 60 labeled samples from SVHN improves accuracy by a margin of about +10% over the state-of-the-art unsupervised DA method. Compared with semi-supervised learning methods, it achieves an accuracy margin of about +30% over Mean Teacher with 60 labeled samples in SVHN.
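The iterative self-training scheme Ds + Dt^(l−1) → Dt^l can be sketched on a toy problem. This is a minimal illustration only: the nearest-class-mean classifier, the shrinking confidence margin, and the 1-D Gaussian data below are assumptions standing in for the paper's actual network and pseudo-label selection rule.

```python
# Toy sketch of iterative self-training for DA: at round l, train on the
# source set D_s plus the target samples pseudo-labelled in round l-1,
# then re-label D_t with the updated model. All components are illustrative.
import random

def train(xs, ys):
    """Fit a nearest-class-mean classifier; return the decision threshold."""
    m0 = sum(x for x, y in zip(xs, ys) if y == 0) / max(1, ys.count(0))
    m1 = sum(x for x, y in zip(xs, ys) if y == 1) / max(1, ys.count(1))
    return (m0 + m1) / 2.0

def predict(th, xs):
    return [1 if x > th else 0 for x in xs]

def iterative_da(src_x, src_y, tgt_x, rounds=5, margin_decay=0.5):
    """D_s + D_t^(l-1) -> D_t^l: grow the pseudo-labelled target set."""
    th = train(src_x, src_y)  # round 0: source-only model
    for l in range(rounds):
        preds = predict(th, tgt_x)
        # confidence margin shrinks each round, admitting more target samples
        margin = margin_decay ** (l + 1) * max(abs(x - th) for x in tgt_x)
        pseudo = [(x, p) for x, p in zip(tgt_x, preds) if abs(x - th) >= margin]
        pseudo_x = [x for x, _ in pseudo]
        pseudo_y = [p for _, p in pseudo]
        th = train(src_x + pseudo_x, src_y + pseudo_y)
    return th

random.seed(0)
# source: class 0 near 0, class 1 near 2; target domain shifted by +1
src_x = [random.gauss(0, 0.3) for _ in range(50)] + [random.gauss(2, 0.3) for _ in range(50)]
src_y = [0] * 50 + [1] * 50
tgt_x = [random.gauss(1, 0.3) for _ in range(50)] + [random.gauss(3, 0.3) for _ in range(50)]
tgt_y = [0] * 50 + [1] * 50

th = iterative_da(src_x, src_y, tgt_x)
acc = sum(p == y for p, y in zip(predict(th, tgt_x), tgt_y)) / len(tgt_y)
print(f"target accuracy after self-training: {acc:.2f}")
```

On this toy shift, the source-only decision boundary splits the target's class-0 cluster, while the self-trained boundary migrates toward the target's own class structure as confident pseudo-labels accumulate.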

CITATION STYLE

APA

Wu, X., Fu, J., Zhang, S., & Zhou, Q. (2019). Iterative discriminative domain adaptation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11857 LNCS, pp. 349–360). Springer. https://doi.org/10.1007/978-3-030-31654-9_30
