Error Detection in Large-Scale Natural Language Understanding Systems Using Transformer Models

Abstract

Large-scale conversational assistants such as Alexa, Siri, Cortana, and Google Assistant process every utterance using multiple models for domain, intent, and named entity recognition. Given the decoupled nature of model development and the large traffic volumes, it is extremely difficult to identify utterances that such systems process erroneously. We address the challenge of detecting domain classification errors using offline Transformer models. We combine utterance encodings from a RoBERTa model with the N-best hypotheses produced by the production system, then fine-tune end-to-end in a multitask setting on a small dataset of human-annotated utterances with domain classification errors. We tested our approach on detecting misclassifications from one domain that accounts for <0.5% of the traffic in a large-scale conversational AI system. Our approach achieves an F1 score of 30%, outperforming a bi-LSTM baseline by 16.9% and a standalone RoBERTa model by 4.8%; ensembling multiple models improves this by a further 2.2%, to 32.2%.
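The pipeline the abstract describes — an utterance encoding concatenated with features from the production system's N-best hypotheses, scored by an error-detection classifier, with predictions averaged across an ensemble — can be sketched in outline. This is a minimal, illustrative sketch: the feature layout, score padding, logistic scoring head, and simple probability-averaging ensemble are assumptions for exposition, not the paper's exact implementation (which uses a fine-tuned RoBERTa encoder and multitask training).

```python
import math

def combine_features(utterance_encoding, nbest_scores, n=5):
    """Concatenate an utterance encoding (e.g. a pooled encoder output)
    with the production system's N-best confidence scores, padded or
    truncated to a fixed length n. Layout is an illustrative assumption."""
    padded = (list(nbest_scores) + [0.0] * n)[:n]
    return list(utterance_encoding) + padded

def error_probability(features, weights, bias):
    """Logistic scoring head standing in for the fine-tuned classifier:
    returns the probability that the domain classification was an error."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def ensemble(probabilities):
    """Average error probabilities from several models — one simple way
    to ensemble; the paper does not specify its ensembling scheme."""
    return sum(probabilities) / len(probabilities)

# Hypothetical usage: a 4-dim encoding plus two N-best scores.
feats = combine_features([0.1, 0.2, 0.3, 0.4], [0.9, 0.8], n=5)
p = error_probability(feats, weights=[0.5] * len(feats), bias=-1.0)
p_ens = ensemble([p, 0.4, 0.6])
```

In the actual system the encoding would come from RoBERTa's pooled output and the weights would be learned end-to-end; the fixed-length padding of N-best scores is one common way to make a variable-length hypothesis list consumable by a dense classifier.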

Citation (APA)
Chada, R., Natarajan, P., Fofadiya, D., & Ramachandra, P. (2021). Error Detection in Large-Scale Natural Language Understanding Systems Using Transformer Models. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 498–503). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.findings-acl.44
