Reiterative Domain Aware Multi-target Adaptation

Abstract

Multi-Target Domain Adaptation (MTDA) is a recently popular and powerful setting in which a single classifier is learned for multiple unlabeled target domains. A common MTDA approach is to adapt to one target domain at a time, sequentially. While only one pass is made through each target domain, the adaptation process for each target domain may consist of many iterations. Inspired by spaced learning in neuroscience, we instead propose a reiterative approach in which we make several passes (reiterations) through each target domain. This leads to better episodic learning that effectively retains features for multiple targets. The reiterative approach does not increase the total number of training iterations, as we simply decrease the number of iterations per domain per reiteration. To build a multi-target classifier, it is also important to have a backbone feature extractor that generalizes well across domains. Towards this, we adopt a Transformer as the feature extraction backbone. We perform extensive experiments on three popular MTDA datasets: Office-Home, Office-31, and DomainNet, a large-scale dataset. Our experiments separately show the benefits of both the reiterative approach and the superior Transformer-based feature extractor backbone.
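For illustration only (not code from the paper): a minimal sketch, in Python, contrasting the single-pass sequential schedule with the reiterative schedule described in the abstract. The function names, target names, and iteration budget are hypothetical; the point is that both schedules spend the same total number of adaptation iterations, only the ordering over target domains differs.

# Hypothetical sketch of the two target-domain training schedules.

def single_pass_schedule(targets, iters_per_target):
    """One pass: spend all iterations on a target before moving to the next."""
    schedule = []
    for t in targets:
        schedule.extend([t] * iters_per_target)
    return schedule

def reiterative_schedule(targets, iters_per_target, num_reiterations):
    """Several passes (reiterations) over the targets, with fewer iterations
    per target per pass, so the total iteration count stays unchanged."""
    iters_per_pass = iters_per_target // num_reiterations
    schedule = []
    for _ in range(num_reiterations):
        for t in targets:
            schedule.extend([t] * iters_per_pass)
    return schedule

# Example with assumed numbers: 3 targets, 3000 iterations each, 5 reiterations.
# Both schedules contain 9000 adaptation steps; only their ordering differs.
targets = ["clipart", "painting", "sketch"]
assert len(single_pass_schedule(targets, 3000)) == 9000
assert len(reiterative_schedule(targets, 3000, 5)) == 9000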

Cite (APA)

Saha, S., Zhao, S., Sheikh, N., & Zhu, X. X. (2022). Reiterative Domain Aware Multi-target Adaptation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13485 LNCS, pp. 68–84). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-16788-1_5
