The conventional wisdom in learning deep classification models is to focus on badly classified examples and to ignore well-classified examples that are far from the decision boundary. For instance, when training with cross-entropy loss, examples with higher likelihoods (i.e., well-classified examples) contribute smaller gradients in back-propagation. However, we theoretically show that this common practice hinders representation learning, energy optimization, and margin growth. To counteract this deficiency, we propose to reward well-classified examples with additive bonuses to revive their contribution to the learning process. This counterexample to the conventional wisdom theoretically resolves all three issues. We empirically support this claim both by directly verifying the theoretical results and by demonstrating the significant performance gains our counterexample brings on diverse tasks, including image classification, graph classification, and machine translation. Furthermore, because our idea resolves these three issues, this paper shows that it can also handle complex scenarios such as imbalanced classification, out-of-distribution (OOD) detection, and applications under adversarial attack. Code is available at https://github.com/lancopku/well-classified-examples-are-underestimated.
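To make the gradient argument concrete, the sketch below (PyTorch, assuming a standard softmax classifier) implements cross-entropy plus one illustrative additive bonus, the mirror term log(1 − p). This specific bonus is a hypothetical stand-in chosen for the illustration, not necessarily the paper's exact formulation: with plain cross-entropy, the gradient with respect to the true-class logit is p − 1 and vanishes as p → 1, whereas with this bonus it is a constant −1, so well-classified examples keep contributing to learning.

```python
import torch
import torch.nn.functional as F

def bonused_cross_entropy(logits, targets, eps=1e-6):
    """Cross-entropy plus an additive bonus rewarding well-classified examples.

    Illustration only: the mirror bonus log(1 - p) is a hypothetical choice.
    With it, d(loss)/d(true-class logit) = -1 for every example, so the
    gradient no longer vanishes as the true-class probability p -> 1.
    """
    log_p = F.log_softmax(logits, dim=-1)                          # log-probabilities
    p = log_p.exp().gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # true-class probability
    ce = F.nll_loss(log_p, targets, reduction="none")              # -log p, per example
    bonus = torch.log((1.0 - p).clamp_min(eps))                    # reward term: log(1 - p) <= 0
    return (ce + bonus).mean()

# Usage: behaves like cross-entropy but keeps easy examples "alive" in back-propagation.
logits = torch.randn(4, 10, requires_grad=True)
targets = torch.randint(0, 10, (4,))
bonused_cross_entropy(logits, targets).backward()
```

The `clamp_min(eps)` guard is a numerical-stability choice for this sketch: without it, the bonus diverges to −∞ as p → 1.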
Zhao, G., Yang, W., Ren, X., Li, L., Wu, Y., & Sun, X. (2022). Well-Classified Examples are Underestimated in Classification with Deep Neural Networks. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 9180–9189). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i8.20904