Quantifying the Dialect Gap and its Correlates Across Languages

7 citations · 11 readers (Mendeley users who have this article in their library)

Abstract

Historically, researchers and consumers have noticed a decrease in quality when applying NLP tools to minority variants of languages (e.g., Puerto Rican Spanish or Swiss German), but studies exploring this have been limited to a select few languages. Additionally, past studies have mainly been conducted in a monolingual context, so cross-linguistic trends have not been identified and tied to external factors. In this work, we conduct a comprehensive evaluation of the most influential, state-of-the-art large language models (LLMs) across two high-use applications, machine translation and automatic speech recognition, to assess their functionality on the regional dialects of several high- and low-resource languages. Additionally, we analyze how the regional dialect gap is correlated with economic, social, and linguistic factors. The impact of training data, including related factors like dataset size and its construction procedure, is shown to be significant but not consistent across models or languages, meaning a one-size-fits-all approach cannot be taken in solving the dialect gap. This work lays the foundation for furthering the field of dialectal NLP by documenting evident disparities and identifying possible pathways for addressing them through mindful data collection.

Cite

CITATION STYLE

APA

Kantharuban, A., Vulić, I., & Korhonen, A. (2023). Quantifying the Dialect Gap and its Correlates Across Languages. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 7226–7245). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.481
