Statistical learning from biased training samples

Abstract

With the deluge of digitized information in the Big Data era, massive datasets are becoming increasingly available for learning predictive models. However, in many practical situations, poor control of the data acquisition process may jeopardize the outputs of machine learning algorithms, and selection bias issues are now the subject of much attention in the literature. The present article investigates how to extend Empirical Risk Minimization, the principal paradigm of statistical learning, to the case where training observations are generated from biased models, i.e., from distributions that differ from the test/prediction distribution while being absolutely continuous with respect to it. Specifically, we show how to build a “nearly debiased” training statistical population from the biased samples and the related biasing functions, following in the footsteps of the approach originally proposed in [46]. Furthermore, we study from a nonasymptotic perspective the performance of minimizers of an empirical version of the risk computed from the statistical population thus created. Remarkably, the learning rate achieved by this procedure is of the same order as that attained in the absence of selection bias. Beyond the theoretical guarantees, we also present experimental results supporting the relevance of the algorithmic approach promoted in this paper.
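To make the debiasing idea concrete, here is a minimal sketch of its simplest single-sample form: importance-weighted ERM with a known biasing function. This is our illustration, not the paper's actual construction (which, following Vardi's approach [46], combines several biased samples jointly); the biasing function omega, the toy regression task, and all helper names below are assumptions made for the example.

# Illustrative sketch only: one biased sample with a known biasing
# function omega, "nearly debiased" by importance weighting before ERM.
# The paper's method handles multiple biased samples via a Vardi-type
# construction [46]; names here (omega, sample_biased) are ours.
import numpy as np

rng = np.random.default_rng(0)

def omega(x):
    # Known biasing function: larger x over-represented at training time.
    # It must be positive wherever the target density is (absolute
    # continuity of the test distribution w.r.t. the training one).
    return 0.5 + x  # for x in [0, 1]

def sample_biased(n):
    # Draw from the omega-biased distribution by rejection sampling.
    out = []
    m = omega(1.0)  # upper bound of omega on [0, 1]
    while len(out) < n:
        x = rng.uniform(0.0, 1.0)
        if rng.uniform(0.0, m) < omega(x):
            out.append(x)
    return np.array(out)

n = 5000
x = sample_biased(n)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=n)  # toy regression target

# Debiasing weights: proportional to 1/omega, self-normalized so the
# weighted training sample mimics the test-time distribution.
w = 1.0 / omega(x)
w /= w.sum()

# Weighted ERM for least squares: argmin_{a,b} sum_i w_i (y_i - a x_i - b)^2,
# solved by rescaling rows of the design matrix by sqrt(w_i).
X = np.column_stack([x, np.ones_like(x)])
a_hat, b_hat = np.linalg.lstsq(X * np.sqrt(w)[:, None],
                               y * np.sqrt(w), rcond=None)[0]
print(f"debiased fit: slope={a_hat:.3f}, intercept={b_hat:.3f}")

Minimizing the weighted empirical risk here plays the role of the ERM step over the "nearly debiased" population studied in the paper; the nonasymptotic analysis shows this loses nothing in the order of the learning rate compared with unbiased sampling.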

Citation (APA)

Clémençon, S., & Laforgue, P. (2022). Statistical learning from biased training samples. Electronic Journal of Statistics, 16(2), 6086–6134. https://doi.org/10.1214/22-EJS2084
