Out-of-domain (OOD) input detection is vital in a task-oriented dialogue system, since accepting unsupported inputs can lead to incorrect system responses. This paper proposes OutFlip, a method that automatically generates out-of-domain samples using only the in-domain training dataset. HotFlip, a white-box natural language attack method, is revised to generate out-of-domain samples instead of adversarial examples. Our evaluation results show that integrating OutFlip-generated out-of-domain samples into the training dataset significantly improves an intent classification model's out-of-domain detection performance.
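The abstract does not spell out the HotFlip mechanics, but the core of a HotFlip-style word substitution is a first-order estimate of how much the loss changes when one token's embedding is swapped for another vocabulary word's embedding. The sketch below is a minimal, hypothetical illustration of that scoring step; the function name, the toy embedding matrix, and the gradient values are all assumptions for demonstration, not the paper's implementation.

```python
import numpy as np

def hotflip_candidate(embeddings: np.ndarray,
                      token_idx: int,
                      grad_wrt_embedding: np.ndarray) -> int:
    """Pick the vocabulary word whose substitution most increases the loss,
    using the first-order approximation (e_w - e_orig) . grad.

    embeddings: (vocab_size, dim) embedding matrix (illustrative).
    token_idx: index of the word currently in the sentence.
    grad_wrt_embedding: gradient of the loss w.r.t. that token's embedding.
    """
    e_orig = embeddings[token_idx]
    # Estimated loss change for substituting each vocabulary word.
    scores = (embeddings - e_orig) @ grad_wrt_embedding
    scores[token_idx] = -np.inf  # never "flip" a word to itself
    return int(np.argmax(scores))

# Toy example: a 4-word vocabulary with 3-dimensional embeddings.
vocab = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0]])
grad = np.array([0.5, -0.2, 1.0])
best = hotflip_candidate(vocab, token_idx=0, grad_wrt_embedding=grad)
```

In the paper's setting, the same flip machinery is redirected: rather than seeking a minimal perturbation that fools the classifier (an adversarial example), the flips are used to push in-domain sentences outside the supported intents, yielding OOD training samples.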
Citation
Choi, D. H., Shin, M. C., Kim, E. G., & Shin, D. R. (2021). OutFlip: Generating Out-of-Domain Samples for Unknown Intent Detection with Natural Language Attack. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 504–512). Association for Computational Linguistics (ACL).