On the Thresholding Strategy for Infrequent Labels in Multi-label Classification

Abstract

In multi-label classification, the imbalance between labels is often a concern. For a label that seldom occurs, the default threshold used to binarize predictions of that label is usually sub-optimal. However, directly tuning the threshold to optimize the F-measure has been observed to overfit easily. In this work, we explain why this overfitting occurs. We then analyze the FBR heuristic, a technique previously proposed to address the overfitting issue, explaining its success but also pointing out some previously unobserved problems. Based on this analysis, we first propose a variant of the FBR heuristic that not only fixes these problems but is also better justified. Second, we propose a new technique that smooths the F-measure when tuning the threshold, and we theoretically prove that, with proper parameters, smoothing yields desirable properties in the tuned threshold. Building on the idea of smoothing, we further propose jointly optimizing micro-F and macro-F as a lightweight alternative that is free of extra hyperparameters. Our methods are empirically evaluated on text and node classification datasets, where they consistently outperform the FBR heuristic.
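The two mechanics the abstract describes, per-label threshold tuning for the F-measure and smoothing that objective, can be sketched as below. This is a minimal illustration only: the grid sweep over candidate thresholds, the function names, the eps parameter, and the additive pseudo-count form of smoothing are assumptions made for exposition, not the paper's exact FBR heuristic or smoothing algorithm.

import numpy as np

def smoothed_f1(scores, y_true, t, eps=0.0):
    # F1 of the binarized predictions (scores >= t) for a single label.
    # eps > 0 adds a pseudo-count to the numerator and denominator, one
    # simple way to smooth F1; this additive form is an illustrative
    # assumption, not necessarily the paper's smoothing.
    y_pred = (scores >= t).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    denom = 2 * tp + fp + fn + eps
    return (2 * tp + eps) / denom if denom > 0 else 0.0

def tune_threshold(scores, y_true, eps=0.0):
    # Sweep candidate thresholds (midpoints between distinct sorted
    # scores, plus the two boundaries) and keep the one maximizing the
    # (smoothed) F1 on held-out validation data.
    s = np.unique(scores)
    candidates = np.concatenate(
        ([s[0] - 1e-6], (s[:-1] + s[1:]) / 2, [s[-1] + 1e-6]))
    return max(candidates, key=lambda t: smoothed_f1(scores, y_true, t, eps))

# Toy usage for an infrequent label: with eps = 0 the tuned threshold can
# chase a handful of validation positives (the overfitting the paper
# discusses); eps > 0 smooths the objective.
rng = np.random.default_rng(0)
y_val = (rng.random(500) < 0.02).astype(int)   # rare label (~2% positive)
s_val = 0.4 * y_val + 0.6 * rng.random(500)    # noisy decision scores
print(tune_threshold(s_val, y_val, eps=0.0))
print(tune_threshold(s_val, y_val, eps=1.0))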

Citation (APA)

Lin, Y. J., & Lin, C. J. (2023). On the Thresholding Strategy for Infrequent Labels in Multi-label Classification. In International Conference on Information and Knowledge Management, Proceedings (pp. 1441–1450). Association for Computing Machinery. https://doi.org/10.1145/3583780.3614996
