Adversarial-learned loss for domain adaptation

Abstract

Recently, remarkable progress has been made in learning transferable representations across domains. Previous works in domain adaptation are mainly based on two techniques: domain-adversarial learning and self-training. However, domain-adversarial learning only aligns feature distributions between domains but does not consider whether the target features are discriminative. On the other hand, self-training utilizes the model predictions to enhance the discrimination of target features, but it is unable to explicitly align domain distributions. To combine the strengths of these two methods, we propose a novel method called Adversarial-Learned Loss for Domain Adaptation (ALDA). We first analyze the pseudo-label method, a typical self-training method. However, there is a gap between pseudo-labels and the ground truth, which can cause incorrect training. Thus we introduce a confusion matrix, learned in an adversarial manner in ALDA, to reduce this gap and align the feature distributions. Finally, a new loss function is automatically constructed from the learned confusion matrix, which serves as the loss for unlabeled target samples. ALDA outperforms state-of-the-art approaches on four standard domain adaptation datasets. Our code is available at https://github.com/ZJULearning/ALDA.
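
For intuition, below is a minimal, illustrative PyTorch sketch of the core idea described in the abstract: a pseudo-label loss on unlabeled target samples that is corrected by a learned confusion matrix. This is not the authors' implementation (see the linked repository for that); the names ConfusionNet and corrected_target_loss, the layer shapes, and the detaching of the confusion matrix are assumptions made for this sketch, and the adversarial objective that trains the confusion network is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConfusionNet(nn.Module):
    """Hypothetical module that predicts a per-sample, row-stochastic
    confusion matrix from the extracted features."""

    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.num_classes = num_classes
        self.fc = nn.Linear(feat_dim, num_classes * num_classes)

    def forward(self, features):
        logits = self.fc(features).view(-1, self.num_classes, self.num_classes)
        return F.softmax(logits, dim=-1)  # each row sums to 1


def corrected_target_loss(classifier_logits, confusion):
    """Pseudo-label loss for unlabeled target samples, corrected by the
    confusion matrix: the hard pseudo-label selects a row of the matrix,
    and that row is used as a soft target distribution."""
    pseudo_labels = classifier_logits.argmax(dim=1)                    # (B,)
    rows = confusion[torch.arange(confusion.size(0)), pseudo_labels]   # (B, C)
    log_probs = F.log_softmax(classifier_logits, dim=1)
    return -(rows.detach() * log_probs).sum(dim=1).mean()


if __name__ == "__main__":
    batch, feat_dim, num_classes = 8, 256, 31   # e.g. Office-31 has 31 classes
    target_feats = torch.randn(batch, feat_dim)
    target_logits = torch.randn(batch, num_classes, requires_grad=True)

    confusion = ConfusionNet(feat_dim, num_classes)(target_feats)      # (B, C, C)
    loss = corrected_target_loss(target_logits, confusion)
    loss.backward()   # gradients here reach only the classifier logits; the
                      # confusion network would be trained with an opposing,
                      # adversarial objective, which is not shown in this sketch.
```

The sketch only illustrates the loss construction: a row of the learned confusion matrix turns a hard pseudo-label into a soft target, which is what the abstract refers to as a loss function auto-constructed from the learned confusion matrix.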

Cite (APA)

Chen, M., Zhao, S., Liu, H., & Cai, D. (2020). Adversarial-learned loss for domain adaptation. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 3521–3528). AAAI press. https://doi.org/10.1609/aaai.v34i04.5757
