
Automatic Code Generation using Pre-Trained Language Models

2021-02-21

Luis Perez, Lizi Ottens, Sudharshan Viswanathan


Abstract

Recent advancements in natural language processing, such as GPT-2 and BERT, have led to near-human performance on multiple natural language tasks. In this paper, we seek to understand whether similar techniques can be applied to a highly structured environment with strict syntax rules. Specifically, we propose an end-to-end machine learning model for code generation in the Python language, built on top of pre-trained language models. We demonstrate that a fine-tuned model can perform well on code generation tasks, achieving a BLEU score of 0.22, an improvement of 46% over a reasonable sequence-to-sequence baseline. All results and related code used for training and data processing are available on GitHub.
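The abstract reports BLEU as the evaluation metric for generated code. As a rough illustration of how such a score is computed, here is a minimal sentence-level BLEU sketch over whitespace-tokenized snippets; the paper's exact tokenization, smoothing, and corpus-level aggregation may differ, so treat this only as an illustration of the metric, not the authors' implementation.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions
    (n = 1..max_n) times a brevity penalty. Whitespace tokenization and
    epsilon smoothing are simplifying assumptions for this sketch."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # clipped overlap: each candidate n-gram counts at most as often
        # as it appears in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(1, sum(cand_counts.values()))
        # epsilon avoids log(0) when an n-gram order has no matches
        log_precisions.append(math.log((overlap + 1e-9) / total))
    # brevity penalty discourages overly short candidates
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(1, len(cand)))
    return bp * math.exp(sum(log_precisions) / max_n)
```

An exact copy of the reference scores about 1.0, while a truncated candidate is penalized by both the clipped precisions and the brevity penalty.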
