Striking a Balance: Alleviating Inconsistency in Pre-trained Models for Symmetric Classification Tasks


Abstract

While fine-tuning pre-trained models for downstream classification is the conventional paradigm in NLP, the resulting models often fail to capture task-specific nuances. Specifically, for tasks that take two inputs and require the output to be invariant to the order of those inputs, inconsistency is often observed in the predicted labels or confidence scores. We highlight this model shortcoming and apply a consistency loss function to alleviate inconsistency in symmetric classification. Our results show improved consistency in predictions on three paraphrase detection datasets without a significant drop in accuracy. We examine classification performance on six datasets (both symmetric and non-symmetric) to showcase the strengths and limitations of our approach.
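The abstract does not specify the exact form of the consistency loss, so the following is only an illustrative sketch of the general idea: penalize the divergence between a model's predicted distributions on the input pair (a, b) and on the swapped pair (b, a). The symmetric KL formulation, the `softmax` helper, and the function names here are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_ab, logits_ba):
    """Symmetric KL divergence between the model's predictions on
    (a, b) and on the order-swapped pair (b, a).

    Zero when both orders yield identical distributions; grows as
    the two predictions diverge, so minimizing it (alongside the
    usual classification loss) pushes the model toward
    order-invariant outputs.
    """
    p, q = softmax(logits_ab), softmax(logits_ba)
    kl_pq = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    kl_qp = np.sum(q * (np.log(q) - np.log(p)), axis=-1)
    return float(np.mean(0.5 * (kl_pq + kl_qp)))

# Identical predictions on both input orders incur no penalty;
# divergent predictions incur a positive one.
same = np.array([[2.0, -1.0]])
swapped = np.array([[-1.0, 2.0]])
print(consistency_loss(same, same))     # → 0.0
print(consistency_loss(same, swapped) > 0.0)  # → True
```

In practice such a term would be added, with a weighting coefficient, to the standard cross-entropy objective during fine-tuning.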

Citation (APA)

Kumar, A., & Joshi, A. (2022). Striking a Balance: Alleviating Inconsistency in Pre-trained Models for Symmetric Classification Tasks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 1887–1895). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-acl.148
