Learning domain invariant embeddings by matching distributions


Abstract

One of the characteristics of the domain shift problem is that the source and target data have been drawn from different distributions. A natural approach to addressing this problem therefore consists of learning an embedding of the source and target data such that they have similar distributions in the new space. In this chapter, we study several methods that follow this approach. At the core of these methods lies the notion of distance between two distributions. We first discuss domain adaptation (DA) techniques that rely on the Maximum Mean Discrepancy (MMD) to measure such a distance. We then study the use of alternative distribution distance measures within one specific DA framework. In this context, we focus on f-divergences, and in particular on the KL divergence and the Hellinger distance. Throughout the chapter, we evaluate the different methods and distance measures on the task of visual object recognition and compare them against related baselines on a standard DA benchmark dataset.
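
As a concrete illustration of the distance at the heart of the first family of methods, the sketch below estimates the squared MMD between source and target samples using a Gaussian RBF kernel. This is a minimal, self-contained example rather than the chapter's implementation; the bandwidth sigma and the toy Gaussian data are illustrative assumptions.

# Minimal sketch (assumed setup, not the chapter's code): empirical squared
# Maximum Mean Discrepancy between two samples with a Gaussian RBF kernel.
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise squared Euclidean distances, mapped through a Gaussian kernel.
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimator: mean k(x,x') + mean k(y,y') - 2 * mean k(x,y).
    return (
        rbf_kernel(X, X, sigma).mean()
        + rbf_kernel(Y, Y, sigma).mean()
        - 2.0 * rbf_kernel(X, Y, sigma).mean()
    )

# Toy example: source and target drawn from shifted Gaussians.
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 10))
target = rng.normal(0.5, 1.0, size=(200, 10))
print(mmd2(source, target))

Larger values of this estimate indicate a larger gap between the two distributions; MMD-based DA methods of the kind the chapter discusses minimize such a quantity over the parameters of the learned embedding.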

Citation (APA)

Baktashmotlagh, M., Harandi, M., & Salzmann, M. (2017). Learning domain invariant embeddings by matching distributions. In Advances in Computer Vision and Pattern Recognition (pp. 95–114). Springer London. https://doi.org/10.1007/978-3-319-58347-1_5
