Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers

Abstract

Large pre-trained language models have shown remarkable performance over the past few years. These models, however, sometimes learn superficial features from the dataset and fail to generalize to distributions that differ from the training scenario. Several approaches have been proposed to reduce a model's reliance on such bias features and thereby improve its robustness in the out-of-distribution setting. However, existing methods usually use a fixed low-capacity model to handle the various bias features, which ignores how learnable those features are. In this paper, we analyze a set of existing bias features and demonstrate that no single bias model works best for all cases. We further show that by choosing an appropriate bias model, we can obtain better robustness results than baselines with a more sophisticated model design.
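The ensemble-based debiasing the abstract refers to typically trains a main classifier jointly with a separate, bias-only model so that the main model is discouraged from relying on features the bias model already captures. A common instantiation is a product-of-experts combination of the two models' predictions. The following is a minimal PyTorch sketch of that general recipe, under the assumption of a standard product-of-experts loss; the function name `poe_debias_loss` and the toy tensors are illustrative and not taken from the paper's implementation.

```python
import torch
import torch.nn.functional as F

def poe_debias_loss(main_logits, bias_logits, labels):
    """Product-of-experts style debiasing loss (illustrative sketch).

    The main model's log-probabilities are added to those of a bias-only
    model (usually kept frozen during this step); cross-entropy on the
    combined distribution pushes the main model to explain what the bias
    model cannot, reducing reliance on superficial features.
    """
    combined = F.log_softmax(main_logits, dim=-1) + F.log_softmax(bias_logits, dim=-1)
    # cross_entropy re-normalizes `combined`, which yields the PoE distribution.
    return F.cross_entropy(combined, labels)

# Toy usage with random logits for a 3-class text classification task.
main_logits = torch.randn(4, 3, requires_grad=True)   # from the main model
bias_logits = torch.randn(4, 3)                       # from a bias-only model
labels = torch.tensor([0, 2, 1, 0])
loss = poe_debias_loss(main_logits, bias_logits, labels)
loss.backward()
```

The paper's contribution concerns the choice of the bias model itself (its capacity relative to how learnable each bias feature is), not the combination rule, so the sketch above should be read only as context for what "ensemble methods" means here.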

Citation (APA)

Zhao, J., Wang, X., Qin, Y., Chen, J., & Chang, K. W. (2022). Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 1634–1640). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.461
