Domain adaptation aims to learn a predictive model that can generalize to a new target domain different from the source (training) domain. To mitigate the domain gap, adversarial training has been developed to learn domain-invariant representations. State-of-the-art methods further use pseudo labels generated by the source-domain classifier to match conditional feature distributions between the source and target domains. However, if the target domain is more complex than the source domain, the pseudo labels are unreliable for characterizing the class-conditional structure of the target-domain data, undermining prediction performance. To resolve this issue, we propose a Pairwise Similarity Regularization (PSR) approach that exploits the cluster structure of the target-domain data and minimizes the divergence between the pairwise similarity of the clustering partition and that of the pseudo predictions. PSR therefore encourages two target instances in the same cluster to receive the same class prediction, thus eliminating the negative effect of unreliable pseudo labels. Extensive experimental results show that our PSR method boosts current adversarial domain adaptation methods by a large margin on four visual benchmarks. In particular, PSR achieves a remarkable improvement of more than 5% over the state of the art on several hard-to-transfer tasks.
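To make the core idea concrete, the sketch below shows one plausible form of a pairwise similarity regularizer in PyTorch. It is an illustrative assumption, not the paper's exact loss: cluster assignments (`cluster_probs`, e.g. from k-means or spectral clustering on target features) and classifier softmax outputs (`probs`) each induce a pairwise similarity matrix via inner products, and a binary cross-entropy term penalizes their divergence.

```python
import torch
import torch.nn.functional as F

def pairwise_similarity_loss(probs, cluster_probs, eps=1e-6):
    """Illustrative PSR-style regularizer (hypothetical form).

    probs:         (N, C) softmax outputs of the classifier on target samples
    cluster_probs: (N, K) soft or one-hot cluster assignments of the same samples
    Returns a scalar loss encouraging samples in the same cluster to share predictions.
    """
    # Pairwise similarity implied by the pseudo predictions: S_pred[i, j] = p_i . p_j
    s_pred = probs @ probs.t()
    # Pairwise similarity implied by the clustering partition of the target data
    s_clu = cluster_probs @ cluster_probs.t()
    # Divergence between the two similarity matrices (binary cross-entropy)
    s_pred = s_pred.clamp(eps, 1.0 - eps)
    return F.binary_cross_entropy(s_pred, s_clu)

# Usage sketch: 32 target samples, 10 classes, hard cluster assignments
probs = torch.softmax(torch.randn(32, 10), dim=1)
clusters = F.one_hot(torch.randint(0, 10, (32,)), num_classes=10).float()
loss = pairwise_similarity_loss(probs, clusters)
```

In practice such a term would be added to the adversarial adaptation objective with a weighting coefficient; the exact clustering method, similarity measure, and divergence used in the paper are not specified in the abstract.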
Citation
Wang, H., Yang, W., Wang, J., Wang, R., Lan, L., & Geng, M. (2020). Pairwise Similarity Regularization for Adversarial Domain Adaptation. In MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia (pp. 2409–2418). Association for Computing Machinery, Inc. https://doi.org/10.1145/3394171.3413516