Cross-Domain Empirical Risk Minimization for Unbiased Long-Tailed Classification

Abstract

We address the overlooked issue of unbiasedness in existing long-tailed classification methods. We find that their overall improvement is mostly attributed to a biased preference for the "tail" over the "head", which pays off only because the test distribution is assumed to be balanced; when the test set is as imbalanced as the long-tailed training data (i.e., the test respects Zipf's law of nature), the "tail" bias is no longer beneficial overall because it hurts the "head" majorities. In this paper, we propose Cross-Domain Empirical Risk Minimization (xERM) for training an unbiased model that achieves strong performance on both test distributions. Empirically, xERM fundamentally improves classification by learning better feature representations, rather than playing the "head vs. tail" trade-off game. Based on causality, we further explain theoretically why xERM achieves unbiasedness: the bias caused by domain selection is removed by adjusting the empirical risks on the imbalanced domain and the balanced but unseen domain.
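The abstract describes xERM as combining an empirical risk measured on the imbalanced (seen) training domain with an adjusted risk targeting the balanced (unseen) domain. The sketch below is only an illustration of that idea, not the paper's exact formulation: it uses logit adjustment with log class priors as a stand-in for the balanced-domain risk, and a fixed hypothetical weight `alpha` in place of whatever weighting the paper derives from its causal adjustment.

```python
import torch
import torch.nn.functional as F

def cross_domain_risk(logits, targets, class_counts, alpha=0.5):
    """Minimal sketch of a cross-domain empirical risk (not the paper's exact loss).

    logits:       (batch, num_classes) raw model outputs
    targets:      (batch,) ground-truth class indices
    class_counts: (num_classes,) per-class frequencies in the long-tailed training set
    alpha:        hypothetical trade-off weight between the two domain risks
    """
    # Risk on the imbalanced domain: plain cross-entropy on the observed data (standard ERM).
    imbalanced_risk = F.cross_entropy(logits, targets)

    # Surrogate risk for the balanced, unseen domain: logit adjustment with the
    # log class priors, so frequent classes must win by a larger margin.
    log_prior = torch.log(class_counts.float() / class_counts.sum())
    balanced_risk = F.cross_entropy(logits + log_prior, targets)

    # Convex combination of the two domain risks.
    return alpha * imbalanced_risk + (1.0 - alpha) * balanced_risk

# Example usage with random tensors standing in for a model's outputs.
if __name__ == "__main__":
    logits = torch.randn(8, 5)
    targets = torch.randint(0, 5, (8,))
    class_counts = torch.tensor([500, 200, 80, 20, 5])
    print(cross_domain_risk(logits, targets, class_counts).item())
```

With `alpha = 1` this reduces to ordinary ERM on the imbalanced data, and with `alpha = 0` it reduces to a purely balanced-domain objective; any intermediate value trades off performance on the two test distributions.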

Cite

APA

Zhu, B., Niu, Y., Hua, X. S., & Zhang, H. (2022). Cross-Domain Empirical Risk Minimization for Unbiased Long-Tailed Classification. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 3589–3597). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i3.20271
