Stable prediction with model misspecification and agnostic distribution shift


Abstract

For many machine learning algorithms, two main assumptions are required to guarantee performance. One is that the test data are drawn from the same distribution as the training data, and the other is that the model is correctly specified. In real applications, however, we often have little prior knowledge about the test data and the underlying true model. Under model misspecification, agnostic distribution shift between training and test data leads to inaccurate parameter estimation and unstable prediction across unknown test data. To address these problems, we propose a novel Decorrelated Weighting Regression (DWR) algorithm which jointly optimizes a variable decorrelation regularizer and a weighted regression model. The variable decorrelation regularizer estimates a weight for each sample such that variables are decorrelated on the weighted training data. These weights are then used in the weighted regression to improve the accuracy of estimating the effect of each variable, thus helping to improve the stability of prediction across unknown test data. Extensive experiments clearly demonstrate that our DWR algorithm can significantly improve the accuracy of parameter estimation and the stability of prediction under model misspecification and agnostic distribution shift.
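The two components described in the abstract can be illustrated with a minimal two-step sketch: first learn non-negative sample weights that shrink the pairwise weighted covariances between covariates, then fit a weighted least-squares regression with those weights. This is an assumption-laden simplification for illustration only — the paper optimizes the two objectives jointly and includes additional weight regularizers, and the toy data, square parameterization of the weights, and use of `scipy.optimize.minimize` are choices made here, not part of the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data (illustrative only): two strongly correlated covariates
# and a linear outcome that depends on the first one.
n = 300
z = rng.normal(size=n)
X = np.column_stack([z + 0.5 * rng.normal(size=n),
                     z + 0.5 * rng.normal(size=n)])
y = X[:, 0] + 0.1 * rng.normal(size=n)

def decorrelation_loss(a, X):
    """Sum of squared pairwise weighted covariances between variables."""
    w = a ** 2                      # square-parameterize to keep weights >= 0
    w = w / w.sum()                 # normalize to a distribution over samples
    Xc = X - w @ X                  # center each variable by its weighted mean
    cov = (w[:, None] * Xc).T @ Xc  # weighted covariance matrix
    off = cov - np.diag(np.diag(cov))
    return (off ** 2).sum()         # penalize only off-diagonal (cross) terms

# Step 1: learn sample weights that decorrelate the covariates.
# (The paper additionally regularizes the weights, e.g. to maintain an
# effective sample size; that is omitted here for brevity.)
a0 = np.ones(n)
res = minimize(decorrelation_loss, a0, args=(X,), method="L-BFGS-B")
w = res.x ** 2
w /= w.sum()

# Step 2: weighted least squares on the reweighted training data.
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
print("learned coefficients:", beta)
```

With decorrelated covariates, the weighted regression attributes the outcome to each variable more accurately, which is the mechanism the abstract credits for stable prediction across shifted test distributions.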

Cite (APA)

Kuang, K., Xiong, R., Cui, P., Athey, S., & Li, B. (2020). Stable prediction with model misspecification and agnostic distribution shift. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 4485–4492). AAAI press. https://doi.org/10.1609/aaai.v34i04.5876
