Can Machines Read Coding Manuals Yet? - A Benchmark for Building Better Language Models for Code Understanding


Abstract

Code understanding is an increasingly important application of Artificial Intelligence. A fundamental aspect of understanding code is understanding text about code, e.g., documentation and forum discussions. Pre-trained language models (e.g., BERT) are a popular approach for various NLP tasks, and there are now a variety of benchmarks, such as GLUE, to help improve the development of such models for natural language understanding. However, little is known about how well such models work on textual artifacts about code, and we are unaware of any systematic set of downstream tasks for such an evaluation. In this paper, we derive a set of benchmarks (BLANCA - Benchmarks for LANguage models on Coding Artifacts) that assess code understanding based on tasks such as predicting the best answer to a question in a forum post, finding related forum posts, or predicting classes related in a hierarchy from class documentation. We evaluate the performance of current state-of-the-art language models on these tasks and show that fine-tuning yields significant improvement on each task. We also show that multi-task training over BLANCA tasks helps build better language models for code understanding.
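To illustrate how the best-answer task described above can be framed, the sketch below ranks candidate answers by their similarity to a question. This is a hypothetical illustration, not the paper's implementation: a fine-tuned language model would normally supply the text embeddings, which are stubbed here with a simple bag-of-words encoder.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Stand-in for a language-model encoder: bag-of-words term counts.
    # In the benchmark setting, a fine-tuned model would produce dense vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def best_answer(question, answers):
    # Score each candidate answer against the question and return the top one.
    q = embed(question)
    return max(answers, key=lambda ans: cosine(q, embed(ans)))

question = "how do I sort a list in python"
answers = [
    "use the sorted builtin to sort a list",
    "java streams are evaluated lazily",
]
print(best_answer(question, answers))
```

Replacing `embed` with a fine-tuned sentence encoder turns this scaffold into the kind of ranking setup the benchmark evaluates.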

Citation (APA)

Abdelaziz, I., Dolby, J., McCusker, J., & Srinivas, K. (2022). Can Machines Read Coding Manuals Yet? - A Benchmark for Building Better Language Models for Code Understanding. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 4415–4423). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i4.20363
