
Comparison of Syntactic and Semantic Representations of Programs in Neural Embeddings

2020-01-24

Austin P. Wright, Herbert Wiklicky


Abstract

Neural approaches to program synthesis and understanding have proliferated widely in the last few years; at the same time, graph-based neural networks have become a promising new tool. This work aims to be the first empirical study comparing the effectiveness of natural language models and static-analysis graph-based models in representing programs in deep learning systems. It compares graph convolutional networks using different graph representations on the task of program embedding. It shows that the sparsity of control flow graphs and the implicit aggregation of graph convolutional networks cause these models to perform worse than naive models. It therefore concludes that simply augmenting purely linguistic or statistical models with formal information does not perform well: the nuanced nature of formal properties introduces more noise than structure for graph convolutional networks.
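The "implicit aggregation" the abstract refers to is the neighborhood-averaging step at the heart of a graph convolutional layer: each node's representation is replaced by a normalized mean over its neighbors, which on a sparse control flow graph mixes in very few, often uninformative, neighbors. A minimal sketch of one such layer (the standard symmetric-normalized GCN propagation rule, not the paper's actual model; the chain-shaped adjacency matrix and feature sizes below are illustrative assumptions):

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN layer: ReLU(D^-1/2 (A+I) D^-1/2 · X · W).

    Each node's output aggregates its own and its neighbors' features --
    the implicit aggregation step discussed in the abstract.
    """
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # symmetric normalization
    return np.maximum(norm @ feats @ weight, 0)  # linear map + ReLU

# Hypothetical 4-statement straight-line program: its control flow graph
# is a sparse chain, so each node aggregates at most two neighbors.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))     # per-statement input features (assumed dim 8)
weight = rng.normal(size=(8, 8))    # learnable layer weights

out = gcn_layer(adj, feats, weight)
print(out.shape)
```

Because the chain is so sparse, every row of `out` depends on at most three nodes of input, which gives some intuition for why such structure can add noise rather than signal when the graph carries little discriminative information.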
