Harnessing the Power of Beta Scoring in Deep Active Learning for Multi-Label Text Classification

1 citation · 5 Mendeley readers

Abstract

In natural language processing, multi-label text classification is particularly challenging because of its large and imbalanced label space. The difficulty is compounded by the need for extensive annotated data to train an advanced deep learning model, especially in specialized fields where labeling is labor-intensive and often requires domain-specific knowledge. To address these challenges, our study introduces a novel deep active learning strategy based on the Beta family of proper scoring rules within the Expected Loss Reduction framework. It computes the expected increase in scores under the Beta scoring rules and transforms these quantities into vector representations of the samples. These vectors guide the selection of a diverse set of informative samples, directly linking the acquisition process to the model's expected proper score. Comprehensive evaluations on both synthetic and real datasets show that our method often outperforms established acquisition techniques for multi-label text classification, with encouraging results across a range of architectures and datasets.
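The abstract describes a two-stage acquisition process: score each unlabeled sample's labels with a Beta-family proper scoring rule, collect the per-label scores into a vector per sample, and then pick a diverse batch in that vector space. The sketch below illustrates this idea only in simplified form; the weight function `beta_score_weight`, the choice `alpha = beta = 1` (the Brier/quadratic member of the Beta family), and the greedy farthest-first batch selection are illustrative assumptions, not the authors' exact Expected Loss Reduction computation.

```python
import numpy as np

def beta_score_weight(p, alpha=1.0, beta=1.0):
    """Illustrative per-label weight derived from the Beta family of
    proper scoring rules; alpha = beta = 1 corresponds to the Brier
    (quadratic) score, and the weight p**alpha * (1-p)**beta is largest
    where the model is most uncertain about a label."""
    return p ** alpha * (1.0 - p) ** beta

def acquisition_vectors(probs, alpha=1.0, beta=1.0):
    """Map per-label probabilities (n_samples x n_labels) to one
    score-based vector representation per unlabeled sample."""
    return beta_score_weight(probs, alpha, beta)

def select_diverse_batch(vectors, k):
    """Greedy farthest-first traversal: seed with the vector of largest
    norm (highest aggregate score), then repeatedly add the sample
    farthest from everything already chosen."""
    chosen = [int(np.argmax(np.linalg.norm(vectors, axis=1)))]
    dists = np.linalg.norm(vectors - vectors[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dists))
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(vectors - vectors[nxt], axis=1))
    return chosen

# Mock pool of 100 unlabeled samples with 10 labels each.
rng = np.random.default_rng(0)
probs = rng.random((100, 10))
vecs = acquisition_vectors(probs)
batch = select_diverse_batch(vecs, k=5)
```

The farthest-first step stands in for whatever diversity-aware selection the paper uses; its role here is simply to show how the score vectors, rather than raw probabilities, drive batch construction.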

Citation (APA)

Tan, W., Nguyen, N. D., Du, L., & Buntine, W. (2024). Harnessing the Power of Beta Scoring in Deep Active Learning for Multi-Label Text Classification. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, pp. 15240–15248). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v38i14.29447
