Covariate shift by kernel mean matching

  • Gretton A
  • Smola A
  • Huang J
  • et al.

Abstract

Given sets of observations of training and test data, we consider the problem of re-weighting the training data such that its distribution more closely matches that of the test data. We achieve this goal by matching covariate distributions between training and test sets in a high dimensional feature space (specifically, a reproducing kernel Hilbert space). This approach does not require distribution estimation. Instead, the sample weights are obtained by a simple quadratic programming procedure. We provide a uniform convergence bound on the distance between the reweighted training feature mean and the test feature mean, a transductive bound on the expected loss of an algorithm trained on the reweighted data, and a connection to single class SVMs. While our method is designed to deal with the case of simple covariate shift (in the sense of Chapter ??), we have also found benefits for sample selection bias on the labels. Our correction procedure yields its greatest and most consistent advantages when the learning algorithm returns a classifier/regressor that is "simpler" than the data might suggest.
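The quadratic program mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' reference implementation: it minimizes the squared RKHS distance between the reweighted training feature mean and the test feature mean, i.e. min_β ½ βᵀKβ − κᵀβ with κ_i = (n_tr/n_te) Σ_j k(x_i, x'_j), subject to box constraints 0 ≤ β_i ≤ B and an approximate normalization |Σ_i β_i − n_tr| ≤ n_tr·ε. The Gaussian kernel bandwidth, the bound B, and the use of SciPy's SLSQP solver (rather than a dedicated QP solver) are choices made here for self-containedness.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2)).
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def kmm_weights(X_tr, X_te, sigma=1.0, B=10.0, eps=None):
    """Kernel mean matching as a convex QP:
        min_beta  0.5 * beta' K beta - kappa' beta
        s.t.      0 <= beta_i <= B,
                  n_tr (1 - eps) <= sum_i beta_i <= n_tr (1 + eps),
    where K is the kernel matrix on training points and
    kappa_i = (n_tr / n_te) * sum_j k(x_i, x'_j) over test points."""
    n_tr, n_te = len(X_tr), len(X_te)
    if eps is None:
        eps = B / np.sqrt(n_tr)  # a common heuristic choice
    K = rbf_kernel(X_tr, X_tr, sigma)
    kappa = (n_tr / n_te) * rbf_kernel(X_tr, X_te, sigma).sum(axis=1)

    obj = lambda b: 0.5 * b @ K @ b - kappa @ b
    grad = lambda b: K @ b - kappa
    cons = [
        {"type": "ineq", "fun": lambda b: n_tr * (1 + eps) - b.sum()},
        {"type": "ineq", "fun": lambda b: b.sum() - n_tr * (1 - eps)},
    ]
    res = minimize(obj, np.ones(n_tr), jac=grad, method="SLSQP",
                   bounds=[(0.0, B)] * n_tr, constraints=cons)
    return res.x
```

On a toy 1-D problem where the test points sit to the right of the training sample, the solver assigns larger weights to training points near the test mass, which is exactly the re-weighting behaviour the abstract describes.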

Author-supplied keywords

  • Brain Computer Interfaces
  • Computational, Information-Theoretic Learning with
  • Learning/Statistics & Optimisation
  • Theory & Algorithms


Authors

  • Arthur Gretton

  • Alexander J Smola

  • Jiayuan Huang

  • Marcel Schmittfull

  • Karsten M Borgwardt

  • Bernhard Schölkopf
