
Analyzing CodeBERT's Performance on Natural Language Code Search

2022-01-16 · ACL ARR January 2022

Anonymous


Abstract

Large language models such as CodeBERT perform very well on tasks such as natural language code search. We show that this performance is most likely due to high token overlap and surface similarity between queries and code in datasets mined from large codebases, rather than to any deeper understanding of the syntax or semantics of the query or the code.
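The abstract's token-overlap claim can be illustrated with a purely lexical baseline that needs no model at all: rank candidate snippets by the Jaccard overlap between the query's tokens and the code's identifier subtokens. This is a minimal sketch of that idea; the tokenizer, function names, and example snippets are illustrative assumptions, not taken from the paper.

```python
import re

def tokenize(text):
    # Split on non-alphanumeric characters, then on camelCase/digit
    # boundaries, lowercasing everything so that e.g. "readFileLines"
    # and the query words "read", "file", "lines" overlap.
    parts = re.split(r"[^A-Za-z0-9]+", text)
    tokens = []
    for p in parts:
        tokens.extend(re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", p))
    return {t.lower() for t in tokens if t}

def jaccard(query, code):
    # Jaccard similarity of the two token sets: |A ∩ B| / |A ∪ B|.
    q, c = tokenize(query), tokenize(code)
    return len(q & c) / len(q | c) if q | c else 0.0

# Illustrative query and candidate pool (not from the paper's dataset).
query = "read lines from a file"
snippets = [
    "def read_file_lines(path):\n    return open(path).read().splitlines()",
    "def matrix_multiply(a, b):\n"
    "    return [[sum(x*y for x, y in zip(r, c)) for c in zip(*b)] for r in a]",
]

# Rank snippets by lexical overlap with the query, highest first.
ranked = sorted(snippets, key=lambda s: jaccard(query, s), reverse=True)
```

If retrieval benchmarks reward exactly this kind of overlap, a learned model can score well by exploiting it without modeling code semantics, which is the confound the abstract points at.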
