Domain adaptation via identical distribution across models and tasks

Abstract

Deep convolutional neural network (CNN) models with millions of parameters trained on large-scale datasets make domain adaptation difficult to realize. To suit different application scenarios, various lightweight network models have been proposed. These models perform well on large-scale datasets but are hard to train from randomly initialized weights when data are scarce. We propose a framework that connects a pre-trained deep model with a lightweight model by enforcing the feature distributions of the two models to be identical. We show that knowledge in the source model can be transferred to the target lightweight model through this identical-distribution loss. Meanwhile, the distribution loss allows training to exploit sparsely labeled data in semi-supervised classification tasks. Moreover, the distribution loss can be applied to large amounts of unlabeled data from the target domain. In the experiments, we evaluate several standard domain adaptation benchmarks and achieve state-of-the-art performance.
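The abstract does not specify the exact form of the distribution loss; the following is a minimal sketch of the general idea, assuming a moment-matching penalty (mean and covariance) between feature batches of a frozen pre-trained source model and a lightweight target model. The architectures, dimensions, and loss form below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def distribution_loss(f_src: torch.Tensor, f_tgt: torch.Tensor) -> torch.Tensor:
    """Penalize differences between the first and second moments of
    source and target feature batches of shape [batch, dim].
    (Assumed moment-matching form; the paper's exact loss may differ.)"""
    mean_diff = (f_src.mean(dim=0) - f_tgt.mean(dim=0)).pow(2).sum()
    cov_diff = (torch.cov(f_src.T) - torch.cov(f_tgt.T)).pow(2).sum()
    return mean_diff + cov_diff

# Hypothetical models: a frozen pre-trained source network and a small target network
# that map inputs to features of the same dimensionality.
source_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
target_model = nn.Sequential(nn.Linear(32, 16))
for p in source_model.parameters():
    p.requires_grad_(False)  # the source model stays fixed

optimizer = torch.optim.Adam(target_model.parameters(), lr=1e-3)

# The loss needs no labels, so it can be driven by unlabeled target-domain data.
x_unlabeled = torch.randn(128, 32)
loss = distribution_loss(source_model(x_unlabeled), target_model(x_unlabeled))
loss.backward()
optimizer.step()
```

Because the loss compares only feature distributions, it can be combined with a standard supervised loss on whatever labeled examples are available, which is how the abstract frames the semi-supervised setting.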

Citation (APA)

Wei, X., Chen, Y., & Su, J. (2018). Domain adaptation via identical distribution across models and tasks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11301 LNCS, pp. 226–237). Springer Verlag. https://doi.org/10.1007/978-3-030-04167-0_21
