MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages

Abstract

While there has been a recent burgeoning of applications at the intersection of natural and programming languages, such as code generation and code summarization, these applications are usually English-centric. This creates a barrier for program developers who are not proficient in English. To mitigate this gap in technology development across languages, we propose a multilingual dataset, MCoNaLa, to benchmark code generation from natural language commands extending beyond English. Modeled on the methodology of the English Code/Natural Language Challenge (CoNaLa) dataset, we annotated a total of 896 NL-Code pairs in three languages: Spanish, Japanese, and Russian. We present a systematic evaluation on MCoNaLa by testing state-of-the-art code generation systems. Although the difficulties vary across the three languages, all systems lag significantly behind their English counterparts, revealing the challenges of adapting code generation to new languages.
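For concreteness, the sketch below shows what MCoNaLa-style NL-Code pairs might look like, loosely following the JSON field convention of the original CoNaLa release (`intent`, `snippet`). The `language` tag and the specific example entries here are hypothetical illustrations of the format, not actual dataset records.

```python
import json

# Hypothetical MCoNaLa-style records: a natural-language intent in Spanish,
# Japanese, or Russian paired with a Python snippet. Field names follow the
# CoNaLa convention (intent/snippet); the "language" tag and these example
# entries are illustrative, not real data from the benchmark.
examples = [
    {
        "language": "es",
        "intent": "invertir el orden de los elementos de la lista `x`",
        "snippet": "x[::-1]",
    },
    {
        "language": "ja",
        "intent": "辞書 `d` をキーでソートする",
        "snippet": "sorted(d.items())",
    },
    {
        "language": "ru",
        "intent": "объединить строки из списка `lines` через пробел",
        "snippet": "' '.join(lines)",
    },
]

# A code generation system is evaluated on producing `snippet` given `intent`.
for ex in examples:
    print(json.dumps(ex, ensure_ascii=False))
```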

Cite

CITATION STYLE

APA

Wang, Z., Cuenca, G., Zhou, S., Xu, F. F., & Neubig, G. (2023). MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Findings of EACL 2023 (pp. 265–273). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-eacl.20
