With pre-trained models such as BERT gaining more and more attention, plenty of research has been done to further improve their capabilities, from enhancing experimental procedures (Sun et al., 2019) to refining the underlying mathematical principles. In this paper, we propose a concise method for improving BERT's performance in text classification by utilizing a label embedding technique while keeping almost the same computational cost. Experimental results on six text classification benchmark datasets demonstrate its effectiveness.
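To make the idea concrete, here is a minimal sketch of one plausible way to fuse label information into a BERT classifier: prepending the label names to the input text so the encoder attends to both jointly, which adds only a handful of tokens and so keeps the cost close to plain BERT. This is an illustrative assumption, not the authors' exact method; all names (LabelFusedBertClassifier, label_names) are hypothetical.

```python
# Sketch only: assumes the "fusion" is done by prepending label names to the
# input sequence. This is NOT necessarily the method from Xiong et al. (2021).
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class LabelFusedBertClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased",
                 label_names=("negative", "positive")):
        super().__init__()
        self.tokenizer = BertTokenizer.from_pretrained(model_name)
        self.bert = BertModel.from_pretrained(model_name)
        self.label_names = list(label_names)
        hidden = self.bert.config.hidden_size
        self.classifier = nn.Linear(hidden, len(self.label_names))

    def forward(self, texts):
        # Prepend the label names to every input text so that label tokens and
        # text tokens are encoded together by the same BERT forward pass.
        label_prefix = " ".join(self.label_names)
        fused = [f"{label_prefix} [SEP] {t}" for t in texts]
        enc = self.tokenizer(fused, padding=True, truncation=True,
                             max_length=128, return_tensors="pt")
        out = self.bert(**enc)
        cls = out.last_hidden_state[:, 0]  # [CLS] representation
        return self.classifier(cls)


if __name__ == "__main__":
    model = LabelFusedBertClassifier()
    logits = model(["the movie was great", "terrible plot and acting"])
    print(logits.shape)  # (2, num_labels)
```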
CITATION STYLE
Xiong, Y., Feng, Y., Wu, H., Kamigaito, H., & Okumura, M. (2021). Fusing Label Embedding into BERT: An Efficient Improvement for Text Classification. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 1743–1750). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.152