Abstract
Data scarcity is a crucial issue for the development of highly multilingual NLP systems. Yet for many under-represented languages (ULs), languages for which NLP research is particularly far behind in meeting user needs, it is feasible to annotate small amounts of data. Motivated by this, we propose XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather than zero-shot; its focus on user-centric tasks, i.e., tasks with broad adoption by speakers of high-resource languages; and its focus on under-represented languages, where this scarce-data scenario is most realistic. XTREME-UP evaluates the capabilities of language models across 88 under-represented languages on 9 key user-centric technologies, including ASR, OCR, MT, and information access tasks of general utility. We create new datasets for OCR, autocomplete, question answering, semantic parsing, and transliteration, and build on and refine existing datasets for other tasks. XTREME-UP provides a methodology for evaluating many modeling scenarios, including text-only, multi-modal (vision, audio, and text), supervised parameter tuning, and in-context learning. We evaluate commonly used models on the benchmark.
Citation
Ruder, S., Clark, J. H., Gutkin, A., Kale, M., Ma, M., Nicosia, M., … Talukdar, P. (2023). XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 1856–1884). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-emnlp.125