SOTAVerified

Coder Reviewer Reranking for Code Generation

2022-11-29

Tianyi Zhang, Tao Yu, Tatsunori B. Hashimoto, Mike Lewis, Wen-tau Yih, Daniel Fried, Sida I. Wang



Abstract

Sampling diverse programs from a code language model and reranking with model likelihood is a popular method for code generation, but it is prone to preferring degenerate solutions. Inspired by collaborative programming, we propose Coder-Reviewer reranking. We augment Coder language models from past work, which generate programs given language instructions, with Reviewer models, which evaluate the likelihood of the instruction given the generated programs. We perform an extensive study across six datasets with eight models from three model families. Experimental results show that Coder-Reviewer reranking leads to consistent and significant improvement (up to 17% absolute accuracy gain) over reranking with the Coder model only. When combined with executability filtering, Coder-Reviewer reranking can often outperform the minimum Bayes risk method. Coder-Reviewer reranking is easy to implement by prompting, can generalize to different programming languages, and works well with off-the-shelf hyperparameters.
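The scoring rule the abstract describes can be sketched in a few lines: each sampled program gets the Coder log-likelihood log p(program | instruction) plus the Reviewer log-likelihood log p(instruction | program), and candidates are ranked by the sum. The function and the candidate tuples below are hypothetical illustrations, not the paper's implementation; the log-probabilities in the toy example are made up to show how a degenerate program with high Coder likelihood can be demoted by a low Reviewer score.

```python
def coder_reviewer_rerank(candidates):
    """Rank candidates by summed Coder and Reviewer log-likelihoods.

    Each candidate is (program, coder_logp, reviewer_logp), where
    coder_logp   approximates log p(program | instruction)   (Coder)
    reviewer_logp approximates log p(instruction | program)  (Reviewer).
    Summing log-probabilities ranks by the product of the two likelihoods.
    """
    return sorted(candidates, key=lambda c: c[1] + c[2], reverse=True)

# Hypothetical scores: the degenerate stub is likely under the Coder
# model alone, but the Reviewer finds the instruction implausible given it.
candidates = [
    ("return 0        # degenerate stub", -1.0, -9.0),   # sum = -10.0
    ("return sum(xs)  # faithful program", -2.5, -1.5),  # sum =  -4.0
]
ranked = coder_reviewer_rerank(candidates)
```

With these numbers the faithful program ranks first, even though the stub has the higher Coder-only score; in the paper this reranking is implemented simply by prompting the same language model in both directions.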

Benchmark Results

| Dataset | Model                                   | Metric   | Claimed | Verified | Status     |
|---------|-----------------------------------------|----------|---------|----------|------------|
| MBPP    | code-davinci-002 175B + Reviewer        | Accuracy | 66.9    |          | Unverified |
| MBPP    | code-davinci-002 175B + Coder-Reviewer  | Accuracy | 66.4    |          | Unverified |
| MBPP    | code-davinci-002 175B + MBR-Exec        | Accuracy | 63      |          | Unverified |
| MBPP    | code-cushman-001 12B + MBR-Exec         | Accuracy | 48.3    |          | Unverified |
| MBPP    | CodeGen 16B + MBR-Exec                  | Accuracy | 47.3    |          | Unverified |
| MBPP    | CodeGen 16B + Coder-Reviewer            | Accuracy | 46.2    |          | Unverified |
| MBPP    | CodeGen 16B + Reviewer                  | Accuracy | 44.1    |          | Unverified |
| MBPP    | InCoder 6.7B + MBR-Exec                 | Accuracy | 26.7    |          | Unverified |
| MBPP    | InCoder 6.7B + Coder-Reviewer           | Accuracy | 26.1    |          | Unverified |
| MBPP    | InCoder 6.7B + Reviewer                 | Accuracy | 24.4    |          | Unverified |
