Abstract
In non-parametric regression, the design of the objective function to be minimized by the learner is critical. In this paper we propose a principled method for constructing and minimizing robust losses that are resilient to errant observations, even under small samples. Existing proposals typically rely on very strong estimates of the true risk, but in doing so require a priori information that is not available in practice. By abandoning direct approximation of the risk, we gain substantial stability at a tolerable price in terms of bias, while circumventing the computational issues of existing procedures. We analyze existence and convergence conditions, provide practical computational routines, and show empirically that the proposed method achieves superior robustness across a wide range of data classes without prior-knowledge assumptions.
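To make the problem setting concrete, the sketch below fits a linear model by minimizing a Huber-type empirical loss, a standard bounded-influence alternative to squared error. This is only a generic illustration of robust loss minimization under assumed toy data; it is not the biased-objective procedure proposed by Holland and Ikeda, and the loss, optimizer, and threshold `delta` are illustrative choices.

```python
# Generic illustration: robust linear regression via a Huber-type loss.
# NOT the biased-objective method of Holland & Ikeda (2017); it only shows
# how replacing squared error with a robust loss dampens errant observations.
import numpy as np
from scipy.optimize import minimize

def huber(r, delta=1.0):
    """Huber loss: quadratic for small residuals, linear for large ones."""
    small = np.abs(r) <= delta
    return np.where(small, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta))

def fit_robust_linear(X, y, delta=1.0):
    """Minimize the empirical Huber risk over linear coefficients."""
    n, d = X.shape
    def objective(w):
        return huber(y - X @ w, delta).mean()
    w0 = np.zeros(d)
    return minimize(objective, w0, method="BFGS").x

# Toy usage: a few gross outliers barely move the robust fit.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=50)
y[:3] += 25.0  # errant observations
print(fit_robust_linear(X, y))
```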
Citation
Holland, M. J., & Ikeda, K. (2017). Robust regression using biased objectives. Machine Learning, 106(9–10), 1643–1679. https://doi.org/10.1007/s10994-017-5653-5