Open Intent Extraction from Natural Language Interactions (Extended Abstract)

Abstract

Accurately discovering user intents from their written or spoken language plays a critical role in natural language understanding and automated dialog response. Most existing research models this as a classification task with a single intent label per utterance. Going beyond this formulation, we define and investigate a new problem of open intent discovery. It involves discovering one or more generic intent types from text utterances that may not have been encountered during training. We propose a novel, domain-agnostic approach, OPINE, which formulates the problem as a sequence tagging task in an open-world setting. It employs a CRF on top of a bidirectional LSTM to extract intents in a consistent format, subject to constraints among intent tag labels. We apply multi-headed self-attention and adversarial training to effectively learn dependencies between distant words, and to robustly adapt our model across varying domains. We also curate and release an intent-annotated dataset of 25K real-life utterances spanning diverse domains. Extensive experiments show that OPINE outperforms state-of-the-art baselines by 5-15% in F1 score.
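The constrained tagging step the abstract mentions — extracting intents "in a consistent format, subject to constraints among intent tag labels" — can be illustrated with Viterbi decoding under BIO transition constraints. The sketch below is a minimal, hand-rolled illustration, not the paper's implementation: the tag names (`B-ACTION`, `B-OBJECT`, etc.) and the per-token scores are assumptions for demonstration, whereas in OPINE the scores would come from the attention-augmented BiLSTM encoder and a learned CRF transition matrix.

```python
# Hypothetical BIO-style tag set for illustration; OPINE's exact label
# inventory may differ.
TAGS = ["O", "B-ACTION", "I-ACTION", "B-OBJECT", "I-OBJECT"]

def allowed(prev, curr):
    """BIO constraint: an I-X tag may only follow B-X or I-X of the same type."""
    if curr.startswith("I-"):
        return prev in ("B-" + curr[2:], "I-" + curr[2:])
    return True

def viterbi(emissions):
    """Constrained Viterbi decoding over per-token tag scores.

    emissions: one dict per token mapping tag -> log-score.
    Returns the highest-scoring tag sequence that respects the BIO
    constraints, even if an invalid tag scores highest at some token.
    """
    # A sequence cannot start with an I- tag.
    best = {t: (emissions[0][t], None) for t in TAGS if not t.startswith("I-")}
    history = [best]
    for scores in emissions[1:]:
        nxt = {}
        for curr in TAGS:
            cands = [(best[prev][0] + scores[curr], prev)
                     for prev in best if allowed(prev, curr)]
            if cands:
                nxt[curr] = max(cands)
        history.append(nxt)
        best = nxt
    # Trace back the best path via the stored backpointers.
    tag = max(best, key=lambda t: best[t][0])
    path = [tag]
    for step in reversed(history[1:]):
        tag = step[tag][1]
        path.append(tag)
    return list(reversed(path))
```

For an utterance like "book a flight", the decoder would assemble an action span ("book") and an object span ("flight") even when raw emission scores favor an ill-formed sequence such as one beginning with `I-ACTION`.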

Citation (APA)
Vedula, N., Lipka, N., Maneriker, P., & Parthasarathy, S. (2021). Open Intent Extraction from Natural Language Interactions (Extended Abstract). In IJCAI International Joint Conference on Artificial Intelligence (pp. 4844–4848). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2021/663
