Average Is Not Enough: Caveats of Multilingual Evaluation

Abstract

This position paper discusses the problem of multilingual evaluation. Using simple statistics, such as average language performance, can inject linguistic biases in favor of dominant language families into the evaluation methodology. We argue that a qualitative analysis informed by comparative linguistics is needed for multilingual results in order to detect this kind of bias. In our case study, we show that results in published works can indeed be linguistically biased, and we demonstrate that a visualization based on the URIEL typological database can detect it.
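As a rough illustration of the kind of URIEL-based visualization the abstract mentions, the sketch below (not the authors' code) uses the lang2vec package, which exposes URIEL typological vectors. The language list and the choice of the "syntax_knn" feature set are illustrative assumptions; the idea is that projecting an evaluation suite's languages into typological space can reveal clustering around dominant language families that a single average score would hide.

```python
# Minimal sketch: plot evaluation languages in URIEL typological space.
# Assumes the lang2vec, scikit-learn, and matplotlib packages are installed.
import lang2vec.lang2vec as l2v
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Hypothetical evaluation languages (ISO 639-3 codes), chosen for illustration.
languages = ["eng", "deu", "fra", "spa", "rus", "hin", "tur", "fin", "jpn", "swa"]

# Fetch URIEL syntactic feature vectors; the KNN-imputed set has no missing values.
features = l2v.get_features(languages, "syntax_knn")
matrix = [features[lang] for lang in languages]

# Project to 2D: tight clusters indicate typological skew in the evaluation set.
coords = PCA(n_components=2).fit_transform(matrix)
for (x, y), lang in zip(coords, languages):
    plt.scatter(x, y)
    plt.annotate(lang, (x, y))
plt.title("URIEL typological space of evaluation languages")
plt.show()
```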

Cite (APA)

Pikuliak, M., & Šimko, M. (2022). Average Is Not Enough: Caveats of Multilingual Evaluation. In MRL 2022 - 2nd Workshop on Multi-Lingual Representation Learning, Proceedings of the Workshop (pp. 125–133). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.mrl-1.13
