OutFlip: Generating Out-of-Domain Samples for Unknown Intent Detection with Natural Language Attack

ArXiv: 2105.05601
Citations: 6
Mendeley readers: 60

Abstract

Out-of-domain (OOD) input detection is vital in a task-oriented dialogue system, since accepting unsupported inputs could lead to incorrect system responses. This paper proposes OutFlip, a method that automatically generates out-of-domain samples using only the in-domain training dataset. The white-box natural language attack method HotFlip is revised to generate out-of-domain samples instead of adversarial examples. Our evaluation results showed that integrating OutFlip-generated out-of-domain samples into the training dataset can significantly improve an intent classification model's out-of-domain detection performance.
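The HotFlip attack that OutFlip builds on ranks candidate word substitutions with a first-order Taylor approximation: replacing the embedding of token i with that of token j changes the loss by roughly (e_j − e_i)·∇L_i. As a rough illustration (not the authors' implementation), the following sketch shows that scoring step with a toy embedding table and a randomized gradient standing in for the back-propagated loss gradient; all names and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (sizes and values are illustrative assumptions):
vocab_size, emb_dim, seq_len = 50, 8, 5
embeddings = rng.normal(size=(vocab_size, emb_dim))   # embedding table
sentence = rng.integers(0, vocab_size, size=seq_len)  # in-domain token ids
# Gradient of the intent-classification loss w.r.t. each input embedding,
# which HotFlip obtains via back-propagation (randomized here for the demo).
grad = rng.normal(size=(seq_len, emb_dim))

def best_flip(sentence, grad, embeddings):
    """First-order HotFlip estimate: flipping position i to token j changes
    the loss by roughly (e_j - e_i) . grad_i; pick the largest increase."""
    # e_j . grad_i for every candidate token j and position i: (vocab, seq)
    candidate_scores = embeddings @ grad.T
    # e_i . grad_i for the current token at each position: (seq,)
    current_scores = np.einsum("se,se->s", embeddings[sentence], grad)
    scores = candidate_scores - current_scores[None, :]
    new_tok, pos = np.unravel_index(np.argmax(scores), scores.shape)
    return pos, new_tok

pos, new_tok = best_flip(sentence, grad, embeddings)
flipped = sentence.copy()
flipped[pos] = new_tok  # single word flip that most increases the loss
```

Where HotFlip uses such flips to craft adversarial examples that keep the original label, OutFlip's revision instead drives the input away from every in-domain intent, so the flipped sentence can be labeled out-of-domain and added to the training set.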

Cite (APA)

Choi, D. H., Shin, M. C., Kim, E. G., & Shin, D. R. (2021). OutFlip: Generating Out-of-Domain Samples for Unknown Intent Detection with Natural Language Attack. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 504–512). Association for Computational Linguistics (ACL).
