Bayes at FigLang 2022 Euphemism Detection shared task: Cost-Sensitive Bayesian Fine-tuning and Venn-Abers Predictors for Robust Training under Class Skewed Distributions

Abstract

Transformers have achieved state-of-the-art performance across most natural language processing tasks. However, the performance of these models often degrades when they are trained on data with skewed class distributions (class imbalance), which is common in social media data, because training tends to be biased towards the head classes that hold the majority of the data points. Classical remedies for this problem, such as re-sampling and re-weighting, often suffer from unstable performance, limited applicability, and poor calibration. In this paper, we propose to use Bayesian methods and Venn-Abers predictors for well-calibrated, robust training under class imbalance. Our proposed approach improves the F1 score over a baseline RoBERTa (Robustly Optimized BERT Pretraining Approach) model by about 6 points (79.0% vs. 72.6%) when training on class-imbalanced data.
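To make the two ingredients concrete, the following is a minimal, hypothetical Python sketch (not the authors' released code): an inverse-frequency class-weighted cross-entropy loss as a simple stand-in for the cost-sensitive side of fine-tuning a RoBERTa classifier (it does not implement the Bayesian layers), and an inductive Venn-Abers predictor built from two isotonic regressions, following Vovk and Petej's construction. The model name, class counts, and all hyperparameters are illustrative assumptions.

```python
import numpy as np
import torch
from torch import nn
from sklearn.isotonic import IsotonicRegression
from transformers import AutoModelForSequenceClassification

# --- Cost-sensitive fine-tuning (sketch) ---
# Hypothetical class counts for a skewed binary task; inverse-frequency
# weights up-weight the minority (euphemistic) class in the loss.
class_counts = torch.tensor([900.0, 100.0])
class_weights = class_counts.sum() / (2.0 * class_counts)

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)
loss_fn = nn.CrossEntropyLoss(weight=class_weights)
# Inside the usual training loop:
#   logits = model(input_ids, attention_mask=mask).logits
#   loss = loss_fn(logits, labels)

# --- Inductive Venn-Abers predictor (sketch) ---
def venn_abers(cal_scores, cal_labels, test_score):
    """Return the multiprobability interval (p0, p1) and a single merged
    probability for one test score, given held-out calibration scores and
    binary labels from the fine-tuned model."""
    p = {}
    for hypo_label in (0, 1):
        # Append the test point with each hypothetical label and refit
        # an isotonic regression on the augmented calibration set.
        scores = np.append(cal_scores, test_score)
        labels = np.append(cal_labels, hypo_label)
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        iso.fit(scores, labels)
        p[hypo_label] = float(iso.predict([test_score])[0])
    p0, p1 = p[0], p[1]
    # Standard combination of the interval into one calibrated probability.
    return p0, p1, p1 / (1.0 - p0 + p1)
```

The interval width p1 - p0 gives a per-example measure of calibration uncertainty, which is what makes Venn-Abers predictors attractive under class skew: the guarantee holds without any assumption on the class distribution.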

Citation (APA)

Trust, P., Provia, K., & Omala, K. (2022). Bayes at FigLang 2022 Euphemism Detection shared task: Cost-Sensitive Bayesian Fine-tuning and Venn-Abers Predictors for Robust Training under Class Skewed Distributions. In Proceedings of the 3rd Workshop on Figurative Language Processing (FLP 2022) (pp. 94–99). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.flp-1.13
