Not So Fast, Classifier – Accuracy and Entropy Reduction in Incremental Intent Classification

1 citation · 39 Mendeley readers

Abstract

Incremental intent classification requires the assignment of intent labels to partial utterances. However, a partial utterance does not necessarily contain enough information to be mapped, correctly and with reasonable confidence, to the intent class of its complete utterance. Using the final interpretation as the ground truth for measuring a classifier’s accuracy on partial utterances is thus problematic. We release inCLINC, a dataset of partial and full utterances with human annotations of plausible intent labels for different portions of each utterance, as an upper (human) baseline for incremental intent classification. We analyse the incremental annotations and propose entropy reduction as a measure of the human annotators’ convergence on an interpretation (i.e. an intent label). We argue that when the annotators do not converge on one or a few possible interpretations, yet the classifier already identifies the final intent class early on, this is a sign of overfitting that can be ascribed to artefacts in the dataset.
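As a rough illustration of the proposed measure, the sketch below computes Shannon entropy (in bits) over the distribution of intent labels that annotators assign to successive prefixes of an utterance, and reports the entropy reduction from one prefix to the next. The labels, the number of annotators, and the choice of Shannon entropy in bits are illustrative assumptions; the paper's exact formulation may differ.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of the empirical label distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    # H = sum_i p_i * log2(1 / p_i)
    return sum((c / total) * log2(total / c) for c in counts.values())

# Hypothetical annotations: intent labels that five annotators assign to
# successive prefixes of one utterance (all labels here are made up).
prefix_annotations = [
    ["play_music", "set_alarm", "weather", "play_music", "set_alarm"],     # short prefix
    ["play_music", "play_music", "set_alarm", "play_music", "play_music"], # longer prefix
    ["play_music"] * 5,                                                    # full utterance
]

prev_h = None
for i, labels in enumerate(prefix_annotations, start=1):
    h = entropy(labels)
    if prev_h is None:
        print(f"prefix {i}: H = {h:.3f} bits")
    else:
        print(f"prefix {i}: H = {h:.3f} bits, reduction = {prev_h - h:.3f} bits")
    prev_h = h
```

On this toy example, entropy drops toward zero as the annotators converge on a single label at the full utterance; a classifier that is already certain of the final intent while annotator entropy is still high would, on the paper's argument, be a candidate case of dataset-artefact overfitting.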

Citation (APA)
Hrycyk, L., Zarcone, A., & Hahn, L. (2021). Not So Fast, Classifier – Accuracy and Entropy Reduction in Incremental Intent Classification. In Proceedings of the 3rd Workshop on NLP for Conversational AI (NLP4ConvAI 2021) (pp. 52–67). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.nlp4convai-1.6
