Semi-supervised Interactive Intent Labeling

Abstract

Building the Natural Language Understanding (NLU) modules of task-oriented Spoken Dialogue Systems (SDS) involves defining intents and entities, collecting task-relevant data, annotating the data with intents and entities, and then repeating the same process for every new functionality or enhancement added to the SDS. In this work, we showcase an Intent Bulk Labeling system where SDS developers can interactively label and augment training data from unlabeled utterance corpora using advanced clustering and visual labeling methods. We extend the Deep Aligned Clustering (Zhang et al., 2021) work with a better backbone BERT model, explore techniques to select the seed data for labeling, and develop a data balancing method using an oversampling technique that utilizes paraphrasing models. We also look at the effect of data augmentation on the clustering process. Our results show that we can achieve over 10% gain in clustering accuracy on some datasets using the combination of the above techniques. Finally, we extract utterance embeddings from the clustering model and plot the data to interactively bulk label the samples, significantly reducing the time and effort needed to label the whole dataset.
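The overall loop described in the abstract (encode utterances with a BERT-based model, cluster them, project the embeddings to 2D, and let a developer label whole clusters at once) can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the sentence encoder, the sample utterances, the cluster count, and the use of KMeans and t-SNE in place of Deep Aligned Clustering are all placeholders chosen for brevity.

```python
# Hedged sketch of an embed -> cluster -> project -> plot loop for bulk intent labeling.
# All model names, utterances, and parameters below are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

utterances = [
    "book a table for two tonight",
    "reserve a restaurant near me",
    "what's the weather like tomorrow",
    "play some jazz music",
]

# 1. Encode unlabeled utterances with a BERT-based sentence encoder (assumed model).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(utterances)

# 2. Cluster the embeddings; the paper uses Deep Aligned Clustering, KMeans is a stand-in.
n_clusters = 2  # illustrative; in practice this is estimated from the data
cluster_ids = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)

# 3. Project to 2D so a developer can inspect clusters and bulk-assign intent labels.
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(embeddings)
plt.scatter(coords[:, 0], coords[:, 1], c=cluster_ids)
for (x, y), text in zip(coords, utterances):
    plt.annotate(text, (x, y), fontsize=7)
plt.title("Utterance clusters for interactive intent labeling")
plt.show()
```

The value of the 2D projection step is that a cohesive cluster can be selected and labeled with a single action, rather than annotating each utterance individually.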

Citation (APA)

Sahay, S., Okur, E., Hakim, N., & Nachman, L. (2021). Semi-supervised Interactive Intent Labeling. In DaSH-LA 2021 - 2nd Workshop on Data Science with Human-in-the-Loop: Language Advances, Proceedings (pp. 31–40). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.dash-1.5
