Code generation from supervised code embeddings


Abstract

Code generation, which produces source code from natural language, is beneficial for building smarter Integrated Development Environments (IDEs), retrieving code more effectively, and so on. Traditional approaches are based on matching similar code snippets; more recently, researchers have turned to machine learning, especially the encoder-decoder framework. When applied to code generation, most encoder-decoder frameworks suffer from two drawbacks: (a) a code snippet is usually much longer than its corresponding natural language description, which makes the two hard to align, especially for word-level encoders; and (b) code snippets with the same functionality can be implemented in many ways that may differ completely at the word level. For drawback (a), we propose a new Supervised Code Embedding (SCE) model to promote the alignment between natural language and code. For drawback (b), with the help of the Abstract Syntax Tree (AST), we propose a new distributed representation of code snippets that overcomes this drawback. To evaluate our approaches, we build a variant of the encoder-decoder model that generates code with the help of pre-trained code embeddings. Experiments on several open-source datasets indicate that our approaches are effective and outperform the state of the art.
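To illustrate how an AST-level view can abstract over the word-level variation described in drawback (b), here is a minimal sketch using Python's built-in ast module; the snippets and the identifier-anonymization scheme are illustrative assumptions, not the paper's SCE model.

import ast

# Two snippets with the same functionality but different identifiers.
snippet_a = "def add(a, b):\n    return a + b"
snippet_b = "def plus(x, y):\n    return x + y"

def structure(source):
    # Parse to an AST and strip identifiers so only the syntactic shape remains.
    tree = ast.parse(source)
    for node in ast.walk(tree):
        for field in ("name", "id", "arg"):
            if hasattr(node, field):
                setattr(node, field, "_")
    return ast.dump(tree)

# The token sequences share almost no words, yet the anonymized ASTs coincide.
print(structure(snippet_a) == structure(snippet_b))  # True

At the word level the two functions are almost disjoint, but their anonymized ASTs are identical, which is the property an AST-based distributed representation can exploit.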

Citation (APA)

Hu, H., Chen, Q., & Liu, Z. (2019). Code generation from supervised code embeddings. In Communications in Computer and Information Science (Vol. 1142 CCIS, pp. 388–396). Springer. https://doi.org/10.1007/978-3-030-36808-1_42
