We consider the problem of learning a classifier when we have little training data from the target domain but abundant training data from several source domains. We make two contributions to this domain adaptation problem. First, we extend the Nearest Class Mean (NCM) classifier by introducing, for each class, domain-dependent mean parameters as well as domain-specific weights. Second, we propose a generic adaptive semi-supervised metric learning technique that iteratively curates the training set: it adds unlabeled samples with high prediction confidence and removes labeled samples whose prediction confidence is low. These two complementary techniques are evaluated on two public benchmarks, the ImageClef Domain Adaptation Challenge and the Office-Caltech datasets, where each is shown to yield improvements and the two are shown to complement each other.
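The two contributions can be illustrated with a minimal sketch. The function and threshold names below are hypothetical, and the sketch simplifies the paper's formulation: scores here use a plain squared Euclidean distance with fixed domain weights, whereas the paper learns a metric and the weights.

```python
import numpy as np

def fit_domain_class_means(X, y, d):
    """Per-(class, domain) means: the NCM classifier extended with
    domain-dependent class means (first contribution)."""
    classes, domains = np.unique(y), np.unique(d)
    means = np.zeros((len(classes), len(domains), X.shape[1]))
    for i, c in enumerate(classes):
        for j, s in enumerate(domains):
            mask = (y == c) & (d == s)
            if mask.any():
                means[i, j] = X[mask].mean(axis=0)
    return classes, domains, means

def dsncm_scores(X, means, domain_weights):
    """Class scores as a domain-weighted mixture of exp(-squared distance)
    to each domain-specific class mean. The paper learns a metric and the
    domain weights; here both are fixed for illustration."""
    diffs = X[:, None, None, :] - means[None, :, :, :]  # (n, C, S, dim)
    dists = (diffs ** 2).sum(axis=-1)                   # (n, C, S)
    return (domain_weights[None, None, :] * np.exp(-dists)).sum(axis=-1)

def curate_training_set(scores_lab, y_lab, scores_unlab,
                        tau_add=0.9, tau_drop=0.5):
    """One curation step (second contribution): return indices of unlabeled
    samples confident enough to add, and of labeled samples confident enough
    to keep. The thresholds tau_add / tau_drop are hypothetical."""
    p_unlab = scores_unlab / scores_unlab.sum(axis=1, keepdims=True)
    add = np.where(p_unlab.max(axis=1) >= tau_add)[0]
    p_lab = scores_lab / scores_lab.sum(axis=1, keepdims=True)
    keep = np.where(p_lab[np.arange(len(y_lab)), y_lab] >= tau_drop)[0]
    return add, keep
```

In the full method these steps would alternate: classify the unlabeled pool, curate the training set, then re-estimate the means (and, in the paper, the metric) on the curated set.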
Csurka, G., Chidlovskii, B., & Perronnin, F. (2015). Domain adaptation with a domain specific class means classifier. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8927, pp. 32–46). Springer Verlag. https://doi.org/10.1007/978-3-319-16199-0_3