Are Pre-trained Transformers Robust in Intent Classification? A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection


Abstract

Pre-trained Transformer-based models have been reported to be robust in intent classification. In this work, we first point out the importance of in-domain out-of-scope detection in few-shot intent recognition tasks, and then illustrate the vulnerability of pre-trained Transformer-based models to samples that are in-domain but out-of-scope (ID-OOS). We construct two new datasets and show empirically that pre-trained models perform poorly on both ID-OOS examples and general out-of-scope examples, especially on fine-grained few-shot intent detection tasks. To understand how the models mistakenly classify ID-OOS intents as in-scope intents, we further analyze confidence scores and overlapping keywords, and point out several promising directions for future work. Resources are available at https://github.com/jianguoz/Few-Shot-Intent-Detection.
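The confidence-score analysis mentioned above targets the standard recipe for out-of-scope detection: thresholding the classifier's maximum softmax probability. Below is a minimal sketch of that recipe, assuming a classifier already fine-tuned on in-scope intents; the checkpoint name, label count, and threshold are illustrative placeholders, not the paper's exact setup.

# Minimal sketch: flag out-of-scope (OOS) inputs by thresholding the
# maximum softmax probability of a fine-tuned intent classifier.
# Assumptions (not from the paper): the checkpoint name, num_labels,
# and the 0.7 threshold are placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # placeholder: use an intent-tuned checkpoint
CONFIDENCE_THRESHOLD = 0.7        # in practice, tuned on a development set

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=5)
model.eval()

def classify_with_oos(utterance: str) -> str:
    """Return the predicted intent id, or 'out-of-scope' when the
    classifier's top softmax probability falls below the threshold."""
    inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    confidence, predicted = probs.max(dim=-1)
    if confidence.item() < CONFIDENCE_THRESHOLD:
        return "out-of-scope"
    return f"intent_{predicted.item()}"

print(classify_with_oos("book me a flight to tokyo"))

The paper's finding is precisely that this kind of thresholding is unreliable for ID-OOS inputs, since pre-trained models often assign them high confidence despite their being out of scope.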

Citation (APA)

Zhang, J., Hashimoto, K., Wan, Y., Liu, Z., Liu, Y., Xiong, C., & Yu, P. S. (2022). Are Pre-trained Transformers Robust in Intent Classification? A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection. In Proceedings of the 4th Workshop on NLP for Conversational AI (pp. 12–20). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.nlp4convai-1.2
