Code to Comment Translation: A Comparative Study on Model Effectiveness & Errors

Citations of this article: 5
Mendeley readers: 60

Abstract

Automated source code summarization is a popular software engineering research topic wherein machine translation models are employed to “translate” code snippets into relevant natural language descriptions. Most evaluations of such models are conducted using automatic reference-based metrics. However, given the relatively large semantic gap between programming languages and natural language, we argue that this line of research would benefit from a qualitative investigation into the various error modes of current state-of-the-art models. Therefore, in this work, we perform both a quantitative and qualitative comparison of three recently proposed source code summarization models. In our quantitative evaluation, we compare the models based on the smoothed BLEU-4, METEOR, and ROUGE-L machine translation metrics, and in our qualitative evaluation, we perform a manual open-coding of the most common errors committed by the models when compared to ground truth captions. Our investigation reveals new insights into the relationship between metric-based performance and model prediction errors grounded in an empirically derived error taxonomy that can be used to drive future research efforts.
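As a reference point for the quantitative setup described in the abstract, the snippet below sketches how the three reported metrics (smoothed BLEU-4, METEOR, and ROUGE-L) could be computed for one predicted comment against its ground-truth caption. This is a minimal sketch assuming the nltk and rouge-score Python packages; it is not the authors' evaluation code, the example strings are illustrative, and the particular smoothing method is only one common choice.

```python
# Minimal sketch of the three reference-based metrics named in the abstract,
# using nltk and rouge-score (an assumption; not the authors' own scripts).
# METEOR additionally requires nltk's wordnet data: nltk.download('wordnet')
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer

reference = "returns the number of elements in the list"   # ground-truth caption (illustrative)
prediction = "return the size of the list"                 # model prediction (illustrative)

ref_tokens = reference.split()
pred_tokens = prediction.split()

# Smoothed BLEU-4: 4-gram BLEU with sentence-level smoothing
# (method4 is one common option for short hypotheses).
bleu4 = sentence_bleu(
    [ref_tokens], pred_tokens,
    weights=(0.25, 0.25, 0.25, 0.25),
    smoothing_function=SmoothingFunction().method4,
)

# METEOR over pre-tokenized input (required by recent nltk versions).
meteor = meteor_score([ref_tokens], pred_tokens)

# ROUGE-L F-measure from Google's rouge-score implementation.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(reference, prediction)["rougeL"].fmeasure

print(f"BLEU-4: {bleu4:.3f}  METEOR: {meteor:.3f}  ROUGE-L: {rouge_l:.3f}")
```

In practice such scores are averaged over the whole test set, which is how comparisons like the one in this study are typically reported.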

Citation (APA)

Mahmud, J., Faisal, F., Arnob, R. I., Anastasopoulos, A., & Moran, K. (2021). Code to Comment Translation: A Comparative Study on Model Effectiveness & Errors. In NLP4Prog 2021 - 1st Workshop on Natural Language Processing for Programming, Proceedings of the Workshop (pp. 1–16). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.nlp4prog-1.1
