Benchmarking Language-agnostic Intent Classification for Virtual Assistant Platforms

Abstract

Current virtual assistant (VA) platforms are constrained by the limited number of languages they support. Every component in these intricate platforms, such as the tokenizer and intent classifier, is engineered for specific languages. Supporting a new language is therefore a resource-intensive operation requiring expensive re-training and re-designing. In this paper, we propose a benchmark for evaluating language-agnostic intent classification, the most critical component of VA platforms. To make the benchmark challenging and comprehensive, we include 29 public and internal datasets across 10 low-resource languages and evaluate various training and testing settings, considering both accuracy and training time. Among 7 commercial VA platforms and pre-trained multilingual language models (LMs), the benchmarking results show that Watson Assistant achieves close-to-best accuracy with the best accuracy-training-time trade-off.
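To make the notion of language-agnostic intent classification concrete, here is a minimal, hypothetical baseline sketch (not the paper's method or any platform's implementation): a nearest-centroid classifier over character n-grams. Because character n-grams require no language-specific tokenizer, the same code can train on utterances from any language. All class and function names below are illustrative assumptions.

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    # Character n-grams need no language-specific tokenizer,
    # which is what makes this simple baseline language-agnostic.
    text = f" {text.lower()} "
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    # Cosine similarity between two sparse n-gram count vectors.
    dot = sum(v * b[k] for k, v in a.items() if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class NgramIntentClassifier:
    """Hypothetical illustration: one n-gram centroid per intent."""

    def fit(self, examples):
        # examples: iterable of (utterance, intent) pairs.
        self.centroids = {}
        for text, intent in examples:
            self.centroids.setdefault(intent, Counter()).update(char_ngrams(text))
        return self

    def predict(self, text):
        grams = char_ngrams(text)
        return max(self.centroids, key=lambda i: cosine(grams, self.centroids[i]))

# Example usage with made-up training utterances:
clf = NgramIntentClassifier().fit([
    ("turn on the lights", "lights_on"),
    ("switch the lights on", "lights_on"),
    ("turn off the lights", "lights_off"),
    ("switch the lights off", "lights_off"),
])
print(clf.predict("turn on the lights"))
```

Such a baseline is far weaker than the multilingual LMs and commercial platforms compared in the paper, but it illustrates the property the benchmark targets: nothing in the pipeline is re-engineered per language.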

Citation (APA)

Wang, G., Qian, C., Pan, L., Qi, H., Kunc, L., & Potdar, S. (2022). Benchmarking Language-agnostic Intent Classification for Virtual Assistant Platforms. In MIA 2022 - Workshop on Multilingual Information Access, Proceedings of the Workshop (pp. 69–76). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.mia-1.7
