IBM MNLP IE at CASE 2021 Task 2: NLI Reranking for Zero-Shot Text Classification


Abstract

Supervised models can achieve very high accuracy for fine-grained text classification. In practice, however, training data may be abundant for some types but scarce or even nonexistent for others. We propose a hybrid architecture that uses as much labeled data as is available for fine-tuning classification models, while also allowing for types with little (few-shot) or no (zero-shot) labeled data. In particular, we pair a supervised text classification model with a Natural Language Inference (NLI) reranking model. The NLI reranker uses a textual representation of target types that allows it to score the strength with which a type is implied by a text, without requiring training data for the types. Experiments show that the NLI model is very sensitive to the choice of textual representation, but can be effective for classifying unseen types. It can also improve classification accuracy for the known types of an already highly accurate supervised model.
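The hybrid scheme the abstract describes — a supervised classifier for known types, an NLI model scoring per-type textual hypotheses, and a reranker combining the two — can be sketched as below. The hypothesis template, the function names, and the linear interpolation weight are illustrative assumptions, not the paper's actual implementation.

```python
def hypothesis_for(type_label: str) -> str:
    """Render a target type as an NLI hypothesis.

    The template is an assumption; the paper stresses that NLI accuracy
    is very sensitive to this choice of textual representation.
    """
    return f"This text is about {type_label}."


def rerank(supervised_scores: dict, nli_scores: dict, alpha: float = 0.5) -> list:
    """Combine supervised and NLI entailment scores into one ranking.

    For types the supervised model knows, interpolate the two scores;
    for unseen (zero-shot) types, fall back to the NLI score alone.
    Returns type labels sorted from best to worst.
    """
    combined = {}
    for label, nli in nli_scores.items():
        if label in supervised_scores:
            combined[label] = alpha * supervised_scores[label] + (1 - alpha) * nli
        else:
            combined[label] = nli  # zero-shot: no supervised signal available
    return sorted(combined, key=combined.get, reverse=True)


# Example: "strike" is unseen by the supervised model but still ranked
# via its NLI entailment score.
ranking = rerank(
    supervised_scores={"protest": 0.7, "riot": 0.3},
    nli_scores={"protest": 0.9, "riot": 0.2, "strike": 0.6},
)
print(ranking)  # ['protest', 'strike', 'riot']
```

In a real system the `nli_scores` would come from an entailment model scoring each hypothesis against the input text; here they are hard-coded to keep the sketch self-contained.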

Citation (APA)

Barker, K., Awasthy, P., Ni, J., & Florian, R. (2021). IBM MNLP IE at CASE 2021 Task 2: NLI Reranking for Zero-Shot Text Classification. In 4th Workshop on Challenges and Applications of Automated Extraction of Socio-Political Events from Text, CASE 2021 - Proceedings (pp. 193–202). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.case-1.24
